A Survey of Bias in Machine Learning Through the Prism of Statistical Parity

Keywords: Fairness; Disparate impact; Machine learning; Tutorial
Subjects: Statistics / Machine Learning [stat.ML]; Mathematics / Statistics [math.ST]; Computer Science / Medical Imaging [cs]; Engineering Sciences / Signal and Image Processing [physics]
Sustainable Development Goals: 4 (Education); 10 (No inequality)
DOI: 10.1080/00031305.2021.1952897
Publication Date: 2021-07-13
ABSTRACT
Applications based on machine learning models have become an indispensable part of everyday life and the professional world. As a consequence, a critical question has recently arisen among the public: do algorithmic decisions convey any type of discrimination against specific groups of the population or minorities? In this article, we show the importance of understanding how bias can be introduced into automatic decisions. We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting. We then propose to quantify the presence of bias using the standard disparate impact index on the real and well-known Adult Income dataset. Finally, we assess the performance of different approaches aiming to reduce bias in binary classification outcomes. Importantly, we show that some intuitive methods are ineffective with respect to the statistical parity criterion. This sheds light on the fact that building fair machine learning models can be a particularly challenging task, in particular when the training observations contain bias.
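As an illustration of the disparate impact index mentioned in the abstract (a sketch, not the authors' code), the index is the ratio of positive-outcome rates between the protected group and the reference group, P(g(X)=1 | S=0) / P(g(X)=1 | S=1); a value near 1 corresponds to statistical parity. The function name and the toy arrays below are assumptions made for this example; on the Adult Income dataset the positive outcome would typically be "income > $50K" and the sensitive attribute the reported gender.

```python
import numpy as np

def disparate_impact(y_pred, sensitive):
    """Ratio of positive-prediction rates: protected group (sensitive == 0)
    over reference group (sensitive == 1). Values close to 1 indicate
    statistical parity; the common '80% rule' flags values below 0.8."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_protected = y_pred[sensitive == 0].mean()
    rate_reference = y_pred[sensitive == 1].mean()
    return rate_protected / rate_reference

# Toy example: predicted labels (1 = favorable outcome) and a binary sensitive attribute.
y_hat = np.array([1, 0, 0, 1, 1, 0, 1, 1])
s     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparate_impact(y_hat, s))  # 0.5 / 0.75 ≈ 0.667, below the 0.8 threshold
```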