Hardness of Learning Neural Networks under the Manifold Hypothesis

DOI: 10.48550/arxiv.2406.01461 Publication Date: 2024-06-03
ABSTRACT
The manifold hypothesis presumes that high-dimensional data lies on or near a low-dimensional manifold. While the utility of encoding geometric structure has been demonstrated empirically, rigorous analysis of its impact on the learnability of neural networks is largely missing. Several recent results have established hardness of learning feedforward and equivariant neural networks under i.i.d. Gaussian or uniform Boolean distributions. In this paper, we investigate the hardness of learning under the manifold hypothesis. We ask which minimal assumptions on the curvature and regularity of the manifold, if any, render the learning problem efficiently learnable. We prove that learning is hard for input manifolds of bounded curvature by extending hardness proofs in the SQ and cryptographic settings for Boolean inputs to the geometric setting. On the other hand, we show that additional assumptions on the volume of the data manifold alleviate these fundamental limitations and guarantee learnability via a simple interpolation argument. Notable instances of this regime are manifolds that can be reliably reconstructed via manifold learning. Looking forward, we comment on and empirically explore intermediate regimes of manifolds with heterogeneous features commonly found in real world data.
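The core premise can be illustrated with a minimal sketch (not taken from the paper): data sampled from a 1-dimensional manifold (a circle) is isometrically embedded into an ambient space of dimension d = 50, and a singular-value decomposition confirms that the data occupies only a low-dimensional subspace despite its high ambient dimension. All names and parameters here are illustrative choices.

```python
import numpy as np

# Illustrative sketch of the manifold hypothesis: high-dimensional data
# concentrating on a low-dimensional manifold. We embed the circle S^1
# (intrinsic dimension 1) into R^50 and check the effective rank.

rng = np.random.default_rng(0)
d = 50   # ambient dimension (assumed, for illustration)
n = 500  # number of samples

t = rng.uniform(0.0, 2.0 * np.pi, size=n)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)  # points on S^1 in R^2

# Random orthonormal embedding R^2 -> R^d; being isometric, it preserves
# the circle's curvature, matching the bounded-curvature setting.
Q, _ = np.linalg.qr(rng.standard_normal((d, 2)))
X = circle @ Q.T  # shape (n, d): "high-dimensional" observations

# Singular values of the centered data reveal the true linear dimension:
# only 2 are nonzero, even though the ambient dimension is 50.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
effective_rank = int(np.sum(s > 1e-8))
print(effective_rank)  # -> 2
```

The embedded circle spans exactly the 2-dimensional subspace defined by the columns of `Q`, so learning algorithms that exploit this structure face a far smaller intrinsic problem than the ambient dimension suggests.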