Proximal regularization for online and batch learning

DOI: 10.1145/1553374.1553407 · Published: 2009-06-16
ABSTRACT
Many learning algorithms rely on the curvature (in particular, strong convexity) of regularized objective functions to provide good theoretical performance guarantees. In practice, the choice of regularization penalty that gives the best testing set performance may result in objectives with little or even no curvature. In these cases, algorithms designed specifically for strongly convex objectives often either fail completely or require some modification that involves a substantial compromise in performance.
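The idea of proximal regularization can be illustrated with a minimal sketch (an assumption-laden toy, not the paper's exact algorithm): when the objective f has no curvature, each step instead minimizes f(w) + (kappa/2)·||w − w_prev||², which is kappa-strongly convex by construction, so methods that need strong convexity apply to the subproblem. The function names, step sizes, and toy objective below are all illustrative choices.

```python
import numpy as np

def proximal_point(f_subgrad, w0, kappa=1.0, outer=30, inner=300):
    """Proximal-point sketch: each outer step approximately minimizes
        f(w) + (kappa/2) * ||w - w_prev||^2,
    which is kappa-strongly convex even when f itself has no curvature,
    so a subgradient method with 1/(kappa*t) steps converges on it."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(outer):
        w_prev = w.copy()
        v = w_prev.copy()
        for t in range(1, inner + 1):
            # Subgradient of the strongly convex proximal subproblem.
            g = f_subgrad(v) + kappa * (v - w_prev)
            # Standard 1/(kappa*t) step size for strongly convex subgradient descent.
            v -= g / (kappa * t)
        w = v
    return w

# Toy objective with no curvature: f(w) = ||A w - b||_1 (piecewise linear).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true
f = lambda w: np.abs(A @ w - b).sum()
subgrad = lambda w: A.T @ np.sign(A @ w - b)

w_hat = proximal_point(subgrad, np.zeros(3))
```

Note that plain subgradient descent on f alone only attains the slower rates available without strong convexity; the added proximal term is what restores curvature at each step.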