The Kolmogorov–Arnold representation theorem revisited
KEYWORDS
Representation
Smoothness
Kolmogorov structure function
DOI: 10.1016/j.neunet.2021.01.020
Publication Date: 2021-01-29T06:30:59Z
AUTHORS (1)
Johannes Schmidt-Hieber
ABSTRACT
There is a longstanding debate whether the Kolmogorov–Arnold representation theorem can explain the use of more than one hidden layer in neural networks. The Kolmogorov–Arnold representation decomposes a multivariate function into interior and outer functions and therefore has indeed a similar structure as a neural network with two hidden layers. But there are distinctive differences. One of the main obstacles is that the outer function depends on the represented function and can be wildly varying even if the represented function is smooth. We derive modifications of the Kolmogorov–Arnold representation that transfer smoothness properties of the represented function to the outer function and that can be well approximated by ReLU networks. It appears that, instead of two hidden layers, a more natural interpretation of the Kolmogorov–Arnold representation is that of a deep network where most of the layers are required to approximate the interior function.
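For context, the classical Kolmogorov–Arnold representation theorem states that every continuous function f : [0,1]^d -> R can be written as

    f(x_1, \dots, x_d) = \sum_{q=0}^{2d} \Phi_q\Big( \sum_{p=1}^{d} \phi_{q,p}(x_p) \Big),

with continuous univariate inner functions \phi_{q,p} and outer functions \Phi_q. The inner sums correspond to a first hidden layer and the outer functions to a second, which is the two-hidden-layer analogy discussed in the abstract. Below is a minimal sketch, not taken from the paper, of how such a representation is evaluated; the concrete choices of phi and Phi are illustrative placeholders, since the theorem only guarantees that suitable functions exist (and, as the abstract notes, the true outer functions depend on f and may be wildly varying):

    import numpy as np

    d = 2  # input dimension

    def phi(q, p, t):
        # Placeholder inner function phi_{q,p}; the theorem only
        # guarantees existence, not this particular form.
        return (q + 1) * np.sin((p + 1) * t)

    def Phi(q, s):
        # Placeholder outer function Phi_q; in the theorem it
        # depends on the represented function f.
        return np.tanh(s - q)

    def ka_eval(x):
        # f(x) = sum_{q=0}^{2d} Phi_q( sum_{p=1}^{d} phi_{q,p}(x_p) )
        return sum(Phi(q, sum(phi(q, p, x[p]) for p in range(d)))
                   for q in range(2 * d + 1))

    print(ka_eval(np.array([0.3, 0.7])))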