On Coresets for Regularized Loss Minimization
DOI:
10.48550/arxiv.1905.10845
Publication Date:
2019-01-01
AUTHORS (5)
ABSTRACT
We design and mathematically analyze sampling-based algorithms for regularized loss minimization problems that are implementable in popular computational models for large data, in which access to the data is restricted in some way. Our main result is that if the regularizer's effect does not become negligible as the norm of the hypothesis scales, then a uniform sample of modest size is with high probability a coreset. In the case that the loss function is either logistic regression or soft-margin support vector machines, and the regularizer is one of the common recommended choices, this result implies that a uniform sample of size $O(d \sqrt{n})$ is with high probability a coreset of $n$ points in $\mathbb{R}^d$. We contrast this upper bound with two lower bounds. The first lower bound shows that our analysis of uniform sampling is tight; that is, a smaller uniform sample will likely not be a coreset. The second lower bound shows that in some sense uniform sampling is close to optimal, as significantly smaller coresets do not generally exist.
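CODE SKETCH
The guarantee described above concerns weighted subsets that approximate the full objective uniformly over hypotheses: a weighted subset $(S, u)$ is an $\epsilon$-coreset if, for every hypothesis $w$, the weighted regularized loss over $S$ is within a $(1 \pm \epsilon)$ factor of the loss over the full data set. Below is a minimal illustrative sketch in Python of the uniform-sampling construction the abstract describes, applied to $L_2$-regularized logistic regression. The helper names, the sample size $k = d\sqrt{n}$, and the regularization strength are assumptions made for the demo, not code or parameter choices from the paper.

import numpy as np

def uniform_coreset(X, y, k, rng):
    """Uniformly sample k points and weight each by n/k, so the weighted
    loss is an unbiased estimate of the full-data loss at any fixed w.
    (Hypothetical helper; not code from the paper.)"""
    n = X.shape[0]
    idx = rng.choice(n, size=k, replace=False)
    return X[idx], y[idx], np.full(k, n / k)

def reg_logistic_loss(w, X, y, lam, weights=None):
    """L2-regularized logistic loss:
    sum_i u_i * log(1 + exp(-y_i <x_i, w>)) + lam * ||w||^2."""
    if weights is None:
        weights = np.ones(X.shape[0])
    margins = y * (X @ w)
    # log(1 + exp(-m)) computed stably via logaddexp
    return weights @ np.logaddexp(0.0, -margins) + lam * np.dot(w, w)

rng = np.random.default_rng(0)
n, d = 10_000, 5
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))

k = int(d * np.sqrt(n))   # sample size matching the O(d sqrt(n)) bound
lam = np.sqrt(n)          # illustrative choice of a regularizer whose
                          # effect stays non-negligible as the data scales
Xs, ys, u = uniform_coreset(X, y, k, rng)

# The coreset guarantee holds uniformly over hypotheses;
# here we only spot-check the approximation at a few random w's.
for _ in range(3):
    w = rng.standard_normal(d)
    full = reg_logistic_loss(w, X, y, lam)
    est = reg_logistic_loss(w, Xs, ys, lam, weights=u)
    print(f"full = {full:10.1f}   coreset estimate = {est:10.1f}   ratio = {est / full:.3f}")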