Shakeout: A New Regularized Deep Neural Network Training Scheme
KEYWORDS
MNIST database
Regularization
Dropout (neural networks)
Deep Neural Networks
Normalization
Statistics
DOI:
10.1609/aaai.v30i1.10202
Publication Date:
2022-06-24
AUTHORS (3)
Guoliang Kang, Jun Li, Dacheng Tao
ABSTRACT
Recent years have witnessed the success of deep neural networks in dealing with plenty of practical problems. The invention of effective training techniques largely contributes to this success. The so-called "Dropout" training scheme is one of the most powerful tools for reducing over-fitting. From the statistical point of view, Dropout works by implicitly imposing an L2 regularizer on the weights. In this paper, we present a new training scheme: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, our method randomly chooses to enhance or reverse the contribution of each unit to the next layer. We show that this scheme leads to a combination of L1 regularization and L2 regularization imposed on the weights, a combination that the Elastic Net models have proved effective in practice. We empirically evaluated Shakeout and demonstrated that sparse network weights are obtained via Shakeout training. Our classification experiments on the real-life image datasets MNIST and CIFAR-10 show that Shakeout deals with over-fitting effectively.
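To make the "enhance or reverse" idea concrete, below is a minimal NumPy sketch of a Shakeout-perturbed fully connected layer at training time. The function name shakeout_weights, the hyperparameter names tau (the drop-like rate) and c (the L1-strength constant), and the exact update rule are our reading of the scheme described in the abstract, not code published with this paper: each input unit is randomly either enhanced, (w + c*tau*sign(w)) / (1 - tau), or reversed, -c*sign(w), so that the perturbed weight equals w in expectation, and setting c = 0 recovers inverted Dropout.

import numpy as np

def shakeout_weights(W, tau=0.5, c=0.1, rng=None):
    # One stochastic draw of Shakeout-perturbed weights (hypothetical
    # sketch; tau, c and the exact rule are our reading of the scheme).
    # For each input unit i, draw r_i ~ Bernoulli(1 - tau):
    #   r_i = 1: w -> (w + c * tau * sign(w)) / (1 - tau)   (enhance)
    #   r_i = 0: w -> -c * sign(w)                          (reverse)
    # The perturbation is unbiased (E[W_perturbed] = W), and c = 0
    # reduces to standard inverted Dropout.
    rng = np.random.default_rng() if rng is None else rng
    s = np.sign(W)
    r = (rng.random(W.shape[0]) < 1.0 - tau).astype(W.dtype)
    r = r[:, None]  # one mask entry per input unit, broadcast over outputs
    return r * (W + c * tau * s) / (1.0 - tau) + (1.0 - r) * (-c * s)

# Training-time forward pass of a Shakeout-regularized linear layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((784, 128)) * 0.01   # e.g. an MNIST-sized input layer
x = rng.standard_normal((32, 784))           # a mini-batch of inputs
h = x @ shakeout_weights(W, tau=0.5, c=0.1, rng=rng)

At test time no sampling is needed: because the perturbation is unbiased, the deterministic weights W are used directly, just as with inverted Dropout.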