Double Sparsity Kernel Learning with Automatic Variable Selection and Data Extraction

Keywords: Overfitting; Kernel (algebra)
DOI: 10.48550/arxiv.1706.01426 Publication Date: 2017-01-01
ABSTRACT
Learning with Reproducing Kernel Hilbert Spaces (RKHS) has been widely used in many scientific disciplines. Because an RKHS can be very flexible, it is common to impose a regularization term in the optimization to prevent overfitting. Standard RKHS learning employs the squared norm penalty of the learning function. Despite its success, several challenges remain. In particular, the squared norm penalty cannot be used directly for variable selection or data extraction. Therefore, when there exist noise predictors, or when the underlying function has a sparse representation in the dual space, the performance of standard RKHS learning can be suboptimal. In the literature, work has been proposed on how to perform variable selection in RKHS learning, and a data sparsity constraint has been considered. However, how to learn with both variable selection and data extraction simultaneously remains unclear. In this paper, we propose a unified method, namely, DOuble Sparsity Kernel (DOSK) learning, to overcome this challenge. An efficient algorithm is provided to solve the corresponding optimization problem. We prove that under certain conditions, our new method can asymptotically achieve variable selection consistency. Simulated and real data results demonstrate that DOSK is highly competitive among existing approaches to RKHS learning.
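To illustrate the data-extraction side of the idea, the following minimal sketch fits a kernel expansion f(x) = Σᵢ αᵢ K(x, xᵢ) with an ℓ₁ penalty on the dual coefficients α instead of the standard squared RKHS norm, so that only a subset of training points is retained. This is not the authors' DOSK algorithm (which also penalizes variables for selection); it is a simplified stand-in using a Gaussian kernel and a generic ISTA (proximal gradient) solver, with all names and parameter values chosen for illustration.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_sparse_dual(K, y, lam=0.2, n_iter=500):
    """ISTA on 0.5 * ||y - K @ alpha||^2 + lam * ||alpha||_1.

    The l1 penalty on the dual coefficients alpha induces data
    sparsity: many alpha_i are exactly zero, so the fitted
    function is supported on only a subset of training points.
    """
    # Step size = 1 / Lipschitz constant of the smooth part.
    step = 1.0 / np.linalg.norm(K.T @ K, 2)
    alpha = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ alpha - y)          # gradient of the smooth loss
        z = alpha - step * grad
        # Soft-thresholding: the proximal operator of the l1 norm.
        alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = np.sin(X[:, 0])            # only the first variable matters
K = gaussian_kernel(X, X)
alpha = fit_sparse_dual(K, y)
nnz = int((np.abs(alpha) > 1e-8).sum())
print(nnz, "of", len(alpha), "training points retained")
```

Replacing the ℓ₁ penalty here with the squared norm αᵀKα would recover standard kernel ridge regression, whose solution is dense; the paper's contribution is to combine this kind of dual-space sparsity with a second penalty that additionally zeroes out noise predictors.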