Calibration and Consistency of Adversarial Surrogate Losses

Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
DOI: 10.48550/arxiv.2104.09658
Publication Date: 2021-01-01
ABSTRACT
Adversarial robustness is an increasingly critical property of classifiers in applications. The design of robust algorithms relies on surrogate losses, since the optimization of the adversarial loss with most hypothesis sets is NP-hard. But which surrogate losses should be used, and when do they benefit from theoretical guarantees? We present an extensive study of this question, including a detailed analysis of the H-calibration and H-consistency of adversarial surrogate losses. We show that, under some general assumptions, convex loss functions, or the supremum-based convex losses often used in applications, are not H-calibrated for important hypothesis sets such as generalized linear models or one-layer neural networks. We then give a characterization of H-calibration and prove that some surrogate losses are indeed H-calibrated for the adversarial loss with these hypothesis sets. Next, we show that H-calibration is not sufficient to guarantee consistency and prove that, in the absence of any distributional assumption, no continuous surrogate loss is consistent in the adversarial setting. This, in particular, proves that a claim presented in a COLT 2020 publication is inaccurate. (The calibration results there are correct modulo subtle definition differences, but the consistency claim does not hold.) We further identify natural conditions under which the surrogate losses we describe in detail are H-consistent. We also report a series of empirical results with simulated data, which show that many H-calibrated surrogate losses are indeed not H-consistent, and validate our theoretical assumptions.
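To make the "supremum-based" surrogate losses mentioned in the abstract concrete, here is a minimal sketch (not taken from the paper) of such a loss for a linear hypothesis h(x) = ⟨w, x⟩ under an ℓ∞ perturbation of radius γ. The function name and variables are illustrative; it uses the standard fact that, for a nonincreasing margin loss φ, the worst-case perturbation shifts the margin by γ times the dual (ℓ1) norm of w.

```python
import math

def sup_hinge_loss(w, x, y, gamma):
    """Supremum-based hinge loss for a linear hypothesis h(x) = <w, x>
    with label y in {-1, +1} and l_inf perturbations of radius gamma.

    Since phi(t) = max(0, 1 - t) is nonincreasing,
        sup_{||x' - x||_inf <= gamma} phi(y <w, x'>)
          = phi(y <w, x> - gamma * ||w||_1),
    because the adversary can decrease the margin y <w, x> by at most
    gamma * ||w||_1 (the l_1 norm is dual to l_inf).
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    dual_norm = sum(abs(wi) for wi in w)  # ||w||_1
    return max(0.0, 1.0 - (margin - gamma * dual_norm))

# Example: with no perturbation (gamma = 0) this reduces to the plain hinge loss.
clean = sup_hinge_loss([1.0, 0.0], [0.5, 0.0], 1, 0.0)    # hinge at margin 0.5 -> 0.5
robust = sup_hinge_loss([1.0, 0.0], [0.5, 0.0], 1, 0.2)   # margin shrinks to 0.3 -> 0.7
```

The closed form makes the supremum cheap to evaluate for linear models; the paper's point is that convexity of such losses does not by itself yield H-calibration or H-consistency guarantees for the adversarial loss.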