Realizable $H$-Consistent and Bayes-Consistent Loss Functions for Learning to Defer
FOS: Computer and information sciences
Computer Science - Machine Learning
Statistics - Machine Learning
Machine Learning (stat.ML)
Machine Learning (cs.LG)
DOI:
10.48550/arxiv.2407.13732
Publication Date:
2024-07-18
ABSTRACT
We present a comprehensive study of surrogate loss functions for learning to defer. We introduce a broad family of surrogate losses, parameterized by a non-increasing function $\Psi$, and establish their realizable $H$-consistency under mild conditions. For cost functions based on classification error, we further show that these losses admit $H$-consistency bounds when the hypothesis set is symmetric and complete, a property satisfied by common neural network and linear function hypothesis sets. Our results also resolve an open question raised in previous work (Mozannar et al., 2023) by proving the realizable $H$-consistency and Bayes-consistency of a specific surrogate loss. Furthermore, we identify choices of $\Psi$ that lead to $H$-consistent surrogate losses for any general cost function, thus achieving Bayes-consistency, realizable $H$-consistency, and $H$-consistency bounds simultaneously. We also investigate the relationship between realizable $H$-consistency and $H$-consistency bounds in learning to defer, highlighting key differences from standard classification. Finally, we empirically evaluate our proposed surrogate losses and compare them with existing baselines.
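To make the setting concrete, the following is a minimal sketch of the score-based learning-to-defer formulation this abstract builds on. The notation (an extra category $n+1$ meaning "defer", expert prediction $m$, deferral cost $c$) follows the learning-to-defer literature generally, not necessarily the paper's exact definitions, and the $\Psi$-composition shown is an illustrative guess at the general shape of such a family rather than the paper's precise parameterization.

% Score-based learning to defer: a scoring function assigns h(x, y') to each
% y' in {1, ..., n+1}, where category n+1 means "defer to the expert", whose
% prediction on x is m. The target deferral loss charges the classification
% error when the model predicts and the expert cost when it defers:
\[
  L_{\mathrm{def}}(h, x, y, m)
  \;=\; \mathbb{1}_{\hat{y} \neq y}\,\mathbb{1}_{\hat{y} \neq n+1}
  \;+\; c(x, y, m)\,\mathbb{1}_{\hat{y} = n+1},
  \qquad
  \hat{y} \;=\; \operatorname{argmax}_{y' \in \{1, \ldots, n+1\}} h(x, y').
\]
% A Psi-parameterized surrogate in the spirit of the abstract composes a
% non-increasing function Psi with softmax-normalized scores; with
% Psi(u) = -log(u), a surrogate of this shape recovers (up to constants) a
% cross-entropy-type loss such as the realizable surrogate of
% Mozannar et al. (2023) that the abstract refers to:
\[
  L_{\Psi}(h, x, y, m)
  \;=\; \Psi\!\left(
      \frac{e^{h(x, y)} + \mathbb{1}_{m = y}\, e^{h(x, n+1)}}
           {\sum_{y' = 1}^{n+1} e^{h(x, y')}}
    \right).
\]

Since $\Psi$ is non-increasing, minimizing $L_{\Psi}$ pushes probability mass toward the true label, and toward the deferral score only when the expert is correct on $x$; the abstract's consistency results concern which choices of $\Psi$ make such minimization provably align with minimizing $L_{\mathrm{def}}$.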