Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes

Keywords: categorical variable, kernel (algebra), multiple kernel learning, predictive power, binary classification
DOI: 10.48550/arxiv.2211.06138 · Publication Date: 2022-01-01
ABSTRACT
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences. Fair ML has largely focused on the protection of single attributes in the simpler setting where both attributes and target outcomes are binary. However, the practical application in many real-world problems entails the simultaneous protection of multiple sensitive attributes, which are often not simply binary, but continuous or categorical. To address this more challenging task, we introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces. This leads to two practical tools: first, the FairCOCCO Score, a normalised metric that can quantify fairness in settings with sensitive attributes of arbitrary type; and second, a subsequent regularisation term that can be incorporated into learning objectives to obtain fair predictors. These contributions address crucial gaps in the algorithmic fairness literature, and we empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
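To illustrate the general idea behind a normalised kernel dependence score between predictions and a sensitive attribute, the sketch below computes an HSIC-style statistic with RBF kernels. This is an illustrative stand-in under assumed definitions, not the paper's exact FairCOCCO Score; the function names and the kernel bandwidth are hypothetical choices.

```python
import numpy as np

def rbf_kernel(x, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def centred(K):
    # Centre the kernel matrix in feature space: H K H, H = I - (1/n) 11^T.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernel_dependence_score(y_pred, s):
    # Normalised empirical kernel dependence between predictions y_pred and
    # sensitive attribute s; 0 indicates (empirical) independence, values
    # near 1 indicate strong dependence. Works for continuous attributes.
    Kc, Lc = centred(rbf_kernel(y_pred)), centred(rbf_kernel(s))
    hsic = np.trace(Kc @ Lc)
    norm = np.sqrt(np.trace(Kc @ Kc) * np.trace(Lc @ Lc))
    return hsic / norm if norm > 0 else 0.0

rng = np.random.default_rng(0)
s = rng.normal(size=200)                     # continuous sensitive attribute
dependent = s + 0.1 * rng.normal(size=200)   # predictions that track s
independent = rng.normal(size=200)           # predictions unrelated to s
print(kernel_dependence_score(dependent, s))    # high: strong dependence
print(kernel_dependence_score(independent, s))  # low: near-independence
```

A score like this can also serve as a differentiable penalty added to a training objective, which mirrors the abstract's second tool: regularising a predictor toward independence from the sensitive attribute.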