Multi-Label Learning with Stronger Consistency Guarantees

FOS: Computer and information sciences. Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
DOI: 10.48550/arxiv.2407.13746 Publication Date: 2024-07-18
ABSTRACT
We present a detailed study of surrogate losses and algorithms for multi-label learning, supported by $H$-consistency bounds. We first show that, for the simplest form of multi-label loss (the popular Hamming loss), the well-known consistent binary relevance surrogate suffers from a sub-optimal dependency on the number of labels in terms of $H$-consistency bounds, when using smooth losses such as logistic losses. Furthermore, this loss function fails to account for label correlations. To address these drawbacks, we introduce a novel surrogate loss, the multi-label logistic loss, that accounts for label correlations and benefits from label-independent $H$-consistency bounds. We then broaden our analysis to cover a more extensive family of multi-label losses, including all common ones and a new extension defined based on linear-fractional functions with respect to the confusion matrix. We also extend our multi-label logistic losses to more comprehensive multi-label comp-sum losses, adapting comp-sum losses from standard classification to multi-label learning. We prove that this family of surrogate losses benefits from $H$-consistency bounds, and thus Bayes-consistency, across any general multi-label loss. Our work thus proposes a unified surrogate loss framework benefiting from strong consistency guarantees for any multi-label loss, significantly expanding upon previous work which only established Bayes-consistency for specific loss functions. Additionally, we adapt constrained losses to multi-label learning in a similar way, which also benefit from $H$-consistency bounds and thus Bayes-consistency for any multi-label loss. We further describe efficient gradient computation algorithms for minimizing the multi-label logistic loss.
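To make the two kinds of surrogates mentioned in the abstract concrete, the minimal sketch below contrasts the binary relevance logistic surrogate (an independent logistic loss per label) with a comp-sum-style surrogate that couples labels by softmax-weighting every candidate label vector according to its Hamming loss. The function names and the specific loss-weighted softmax form are illustrative assumptions for exposition, not the paper's exact definition of the multi-label logistic loss.

```python
import numpy as np

def hamming_loss(y_pred, y_true):
    """Fraction of label positions where prediction and target disagree."""
    return float(np.mean(y_pred != y_true))

def binary_relevance_logistic(scores, y_true):
    """Binary relevance surrogate: an independent logistic loss per label.
    scores: real-valued per-label scores h_j(x); y_true in {-1, +1}^q."""
    return float(np.sum(np.log1p(np.exp(-y_true * scores))))

def _logsumexp(v):
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

def multilabel_comp_sum_surrogate(scores, y_true, target_loss=hamming_loss):
    """Illustrative comp-sum-style surrogate (an assumed form, not the paper's
    exact multi-label logistic loss): the expected target loss under a softmax
    distribution over all 2^q label vectors. This enumeration is exponential
    in q and is shown only for small q; the paper instead describes efficient
    gradient computation for its actual loss."""
    q = len(y_true)
    label_vectors = np.array(
        [[1 if (s >> j) & 1 else -1 for j in range(q)] for s in range(2 ** q)]
    )
    joint_scores = label_vectors @ scores             # score of each label vector
    log_probs = joint_scores - _logsumexp(joint_scores)
    losses = np.array([target_loss(y, y_true) for y in label_vectors])
    return float(losses @ np.exp(log_probs))

# Example with q = 3 labels.
scores = np.array([2.0, -1.0, 0.5])
y_true = np.array([1, -1, 1])
print(binary_relevance_logistic(scores, y_true))      # sum of per-label logistic losses
print(multilabel_comp_sum_surrogate(scores, y_true))  # loss-weighted softmax value
```

Under this assumed form, the second surrogate couples labels through a joint softmax over label vectors rather than treating them independently, which is the qualitative property the abstract attributes to the proposed multi-label logistic loss.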