Arbitrariness and Social Prediction: The Confounding Role of Variance in Fair Classification

DOI: 10.1609/aaai.v38i20.30203 Publication Date: 2024-03-25T12:35:51Z
ABSTRACT
Variance in predictions across different trained models is a significant, under-explored source of error in fair binary classification. In practice, the variance on some data examples is so large that decisions can be effectively arbitrary. To investigate this problem, we take an experimental approach and make four overarching contributions. We: 1) Define a metric called self-consistency, derived from variance, which we use as a proxy for measuring and reducing arbitrariness; 2) Develop an ensembling algorithm that abstains from classification when a prediction would be arbitrary; 3) Conduct the largest to-date empirical study of the role of variance (vis-a-vis self-consistency and arbitrariness) in fair binary classification; and, 4) Release a toolkit that makes the US Home Mortgage Disclosure Act (HMDA) datasets easily usable for future research. Altogether, our experiments reveal shocking insights about the reliability of conclusions drawn on benchmark datasets. Most fairness benchmarks are close-to-fair when taking into account the amount of arbitrariness present -- before we even try to apply any fairness interventions. This finding calls into question the practical utility of common algorithmic fairness methods, and in turn suggests that we should reconsider how we choose to measure fairness in binary classification.
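The two core ideas in the abstract -- a self-consistency metric derived from prediction variance, and an ensemble that abstains when a decision would be arbitrary -- can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes self-consistency is measured as the pairwise agreement rate among the binary predictions of models trained on different resamples of the data, and the abstention threshold `0.75` is an arbitrary choice for demonstration.

```python
from itertools import combinations


def self_consistency(preds):
    """Pairwise agreement rate among binary predictions for one example.

    preds: predictions (0/1) for a single example, one per model trained
    on a different resample of the training data. Returns a value in
    [0.5, 1.0]; 1.0 means all models agree, values near 0.5 mean the
    decision is effectively arbitrary.
    """
    pairs = list(combinations(preds, 2))
    agreements = sum(a == b for a, b in pairs)
    return agreements / len(pairs)


def abstaining_ensemble(preds, threshold=0.75):
    """Return the majority vote, or None (abstain) when the prediction
    would be arbitrary, i.e. self-consistency falls below the threshold."""
    if self_consistency(preds) < threshold:
        return None  # abstain rather than issue an arbitrary decision
    return int(sum(preds) > len(preds) / 2)
```

For example, four models that split 2-2 on an example give a self-consistency of 1/3 (2 agreeing pairs out of 6), so the ensemble abstains, whereas a unanimous 4-0 split gives 1.0 and a confident prediction.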