The loss surface of deep and wide neural networks
FOS: Computer and information sciences
Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)
DOI:
10.48550/arxiv.1704.08045
Publication Date:
2017-01-01
AUTHORS (2)
ABSTRACT
While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points. It has been argued that this is because all local minima are close to being globally optimal. We show that this is (almost) true: in fact, almost all local minima are globally optimal for a fully connected network with squared loss and an analytic activation function, provided that the number of hidden units in one layer of the network is larger than the number of training points and the network structure from this layer on is pyramidal.
COMMENTS:
ICML 2017. Main results now hold for larger classes of loss functions.
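The abstract's width condition can be illustrated with a small numerical sketch. Assuming a toy dataset and random first-layer weights (all sizes and names here are illustrative, not from the paper): when a hidden layer has more units than there are training points, the hidden feature matrix generically has full row rank, so output weights achieving zero squared loss exist and can be found by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: N training points, one hidden layer with h > N units
# (the paper's key assumption), pyramidal from there on (h -> 1).
N, d, h = 5, 3, 8
X = rng.standard_normal((N, d))
y = rng.standard_normal((N, 1))

# Random first-layer weights with an analytic activation (tanh).
W1 = rng.standard_normal((d, h))
b1 = rng.standard_normal(h)
F = np.tanh(X @ W1 + b1)          # hidden feature matrix, shape (N, h)

# Because h > N, F generically has full row rank N, so output weights
# attaining zero squared loss exist; least squares recovers one.
print(np.linalg.matrix_rank(F))   # 5, i.e. full row rank
W2, *_ = np.linalg.lstsq(F, y, rcond=None)
loss = 0.5 * np.sum((F @ W2 - y) ** 2)
print(loss)                       # ~0, up to machine precision
```

This only shows that a zero-loss point exists under the width condition; the paper's contribution is the stronger statement about almost all local minima being globally optimal.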