Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks

Keywords: Initialization, Stochastic Gradient Descent, Normalization, Memory footprint, Deep Neural Networks, Gradient boosting
DOI: 10.48550/arxiv.1905.11286 | Publication Date: 2019-01-01
ABSTRACT
We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay. In our experiments on neural networks for image classification, speech recognition, machine translation, and language modeling, it performs on par with or better than well-tuned SGD with momentum, Adam, and AdamW. Additionally, NovoGrad (1) is robust to the choice of learning rate and weight initialization, (2) works well in a large-batch setting, and (3) has a two times smaller memory footprint than Adam.
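To make the abstract's description concrete, below is a minimal NumPy sketch of a NovoGrad-style update for a single layer, assuming a scalar per-layer second moment of the gradient norm and decoupled weight decay folded into the first moment. The function name `novograd_step` and the hyperparameter defaults are illustrative, not the authors' reference implementation.

```python
# Illustrative NovoGrad-style update for one layer (sketch, not the
# official implementation). Each layer keeps a scalar second moment v
# of its gradient norm and a tensor first moment m.
import numpy as np

def novograd_step(w, g, m, v, lr=0.01, beta1=0.95, beta2=0.98,
                  weight_decay=0.001, eps=1e-8):
    """One update of a layer's weights w given gradient g.

    m : first-moment tensor (same shape as w); v : scalar second moment
    (None on the first step). Returns updated (w, m, v).
    """
    g_norm_sq = float(np.sum(g * g))           # squared per-layer gradient norm
    if v is None:                               # initialize on the first step
        v = g_norm_sq
    else:
        v = beta2 * v + (1.0 - beta2) * g_norm_sq
    # Normalize the gradient by the layer-wise norm, then add decoupled
    # weight decay before accumulating the first moment.
    update = g / (np.sqrt(v) + eps) + weight_decay * w
    m = beta1 * m + update
    w = w - lr * m
    return w, m, v
```

Because v is a single scalar per layer rather than a per-parameter tensor as in Adam, the optimizer state is roughly halved, which is consistent with the memory-footprint claim in the abstract.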