Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms

DOI: 10.48550/arxiv.2405.14778
Publication Date: 2024-05-23
ABSTRACT
We study theoretical properties of a broad class of regularized algorithms with vector-valued output. These spectral algorithms include kernel ridge regression, kernel principal component regression, various implementations of gradient descent, and many more. Our contributions are twofold. First, we rigorously confirm the so-called saturation effect for ridge regression with vector-valued output by deriving a novel lower bound on learning rates; this bound is shown to be suboptimal when the smoothness of the regression function exceeds a certain level. Second, we present an upper bound on the finite sample risk for general vector-valued spectral algorithms, applicable to both well-specified and misspecified scenarios (where the true regression function lies outside of the hypothesis space), which is minimax optimal in various regimes. All of our results explicitly allow the case of infinite-dimensional output variables, proving consistency of recent practical applications.
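For intuition, the following is a minimal numpy sketch of the simplest member of this class, kernel ridge regression with vector-valued output. The Gaussian kernel, the regularization level lam, and all function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample matrices X (n x p) and Y (m x p).
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def krr_fit_predict(X_train, Y_train, X_test, lam=1e-2, gamma=1.0):
    # Kernel ridge regression with vector-valued targets Y_train (n x d).
    # The regularized inverse (K + n*lam*I)^{-1} is applied column-wise,
    # so all output dimensions share one linear solve.
    n = X_train.shape[0]
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), Y_train)  # (n x d) coefficients
    return rbf_kernel(X_test, X_train, gamma) @ alpha          # (m x d) predictions

# Toy usage: learn a 3-dimensional output from a 2-dimensional input.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1]), X[:, 0] * X[:, 1]], axis=1)
print(krr_fit_predict(X, Y, X[:5]).shape)  # (5, 3)
```

Other spectral algorithms in this class differ only in the filter applied to the spectrum of K: ridge regression corresponds to the filter 1/(sigma + lam), gradient descent to a truncated Neumann series, and principal component regression to a spectral cut-off.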