Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks

Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Statistics Theory (math.ST); MSC class: 62G08
DOI: 10.48550/arxiv.1903.10047
Publication Date: 2019-01-01
ABSTRACT
Convolutional neural networks (CNNs) have been shown to achieve optimal approximation and estimation error rates (in the minimax sense) in several function classes. However, previously analyzed CNNs are unrealistically wide and difficult to obtain via optimization due to sparse constraints in important function classes, including the H\"older class. We show that a ResNet-type CNN can attain the minimax optimal error rates in these classes in more plausible situations -- it can be dense, and its width, channel size, and filter size are constant with respect to the sample size. The key idea is that we can replicate the learning ability of fully-connected neural networks (FNNs) with tailored CNNs, as long as the FNNs have \textit{block-sparse} structures. Our theory is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into one achieved by CNNs. As an application, we derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and H\"older classes with the same strategy.
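To make the architecture class in the abstract concrete, the following is a toy NumPy sketch (not code from the paper) of a ResNet-type convolutional block in which the channel size and filter size are fixed constants and an identity skip connection carries the input forward; all function names and shapes here are illustrative assumptions.

```python
import numpy as np

def conv1d(x, w):
    """Zero-padded 1D convolution preserving length.
    x: (C_in, L) input signal, w: (C_out, C_in, K) filters with odd K."""
    C_out, C_in, K = w.shape
    L = x.shape[1]
    pad = K // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros((C_out, L))
    for o in range(C_out):
        for i in range(C_in):
            for t in range(L):
                out[o, t] += np.dot(xp[i, t:t + K], w[o, i])
    return out

def resnet_block(x, w1, w2):
    """One ResNet-type block: a two-layer conv sub-network with ReLU,
    added back to the input via an identity skip connection.
    Channel size and filter size stay constant across blocks."""
    h = np.maximum(conv1d(x, w1), 0.0)  # ReLU activation
    return x + conv1d(h, w2)            # residual (skip) connection
```

Stacking such blocks keeps the width, channel size, and filter size constant while depth grows, which is the regime in which the paper's constant-size guarantees are stated.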