Evaluating Model Performance with Hard-Swish Activation Function Adjustments

FOS: Computer and information sciences — Computer Vision and Pattern Recognition (cs.CV)
DOI: 10.48550/arXiv.2410.06879
Publication Date: 2024-10-09
ABSTRACT
In the field of pattern recognition, achieving high accuracy is essential. While training a model to recognize different complex images, it is vital to fine-tune it to achieve the highest accuracy possible. One strategy for fine-tuning a model involves changing its activation function. Most pre-trained models use ReLU as their default activation function, but switching to a different function such as Hard-Swish could be beneficial. This study evaluates model performance using the ReLU, Swish, and Hard-Swish activation functions across diverse image datasets. Our results show a 2.06% increase in accuracy on the CIFAR-10 dataset and a 0.30% increase on the ATLAS dataset. Modifying a model's architecture in this way can lead to improved overall accuracy.
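As context for the fine-tuning strategy the abstract describes, below is a minimal sketch (not taken from the paper) of swapping ReLU for Hard-Swish in a pre-trained PyTorch model. The choice of framework, the ResNet-18 backbone, and the helper name replace_relu_with_hardswish are illustrative assumptions; the study does not specify these details. Hard-Swish is defined as x * ReLU6(x + 3) / 6, a piecewise approximation of Swish(x) = x * sigmoid(x).

```python
import torch.nn as nn
from torchvision import models

def replace_relu_with_hardswish(module: nn.Module) -> None:
    """Recursively swap every nn.ReLU in a model for nn.Hardswish.

    Hard-Swish(x) = x * ReLU6(x + 3) / 6, which approximates
    Swish(x) = x * sigmoid(x) without an exponential.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.Hardswish(inplace=True))
        else:
            replace_relu_with_hardswish(child)

# Hypothetical usage: adapt a ReLU-based pre-trained backbone,
# then fine-tune it on the target dataset as usual.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
replace_relu_with_hardswish(model)
```

Because the swap happens in place on the module tree, the pre-trained weights are kept intact and only the nonlinearity changes, which is what makes a fine-tuning comparison like the one in the abstract possible.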