Group Fairness via Group Consensus

DOI: 10.1145/3630106.3659006 | Publication Date: 2024-06-05
ABSTRACT
Ensuring equitable impact of machine learning models across different societal groups is of utmost importance for real-world applications. Prior research in fairness has predominantly focused on adjusting model outputs through pre-processing, in-processing, or post-processing techniques. These techniques focus on correcting bias in either the data or the model. However, we argue that data and model should be addressed in conjunction. To achieve this, we propose an algorithm called GroupDebias to reduce unfairness in a model-guided fashion, thereby enabling models to exhibit more equitable behavior. Even though it is model-aware, the core idea is independent of the model architecture, making it a versatile and effective approach that can be broadly applied across various domains and model types. Our method focuses on systematically addressing biases present in the training data itself by adaptively dropping samples to increase fairness. Theoretically, the proposed method enjoys a guaranteed improvement in demographic parity at the expense of a bounded reduction in balanced accuracy. A comprehensive evaluation through extensive experiments on diverse datasets demonstrates that our method consistently and significantly outperforms existing fairness enhancement techniques, achieving substantial gains in fairness with minimal loss in performance.
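
The abstract describes GroupDebias only at a high level, so the sketch below is not the paper's algorithm. It merely illustrates the general idea of model-guided sample dropping: train a model, identify the group the model currently favours, remove a fraction of that group's training samples, retrain, and compare the demographic parity gap and balanced accuracy. The synthetic data, the dropping heuristic, and helper names such as dp_gap and evaluate are assumptions introduced for illustration.

```python
# Illustrative sketch only (not the GroupDebias algorithm): model-guided sample
# dropping aimed at shrinking the demographic parity gap, with balanced accuracy
# tracked as the utility cost. Data and heuristic are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: binary sensitive attribute `a` correlated with the label.
n = 4000
a = rng.integers(0, 2, size=n)                      # group membership
x = rng.normal(size=(n, 5)) + a[:, None] * 0.8      # features shifted by group
y = (x[:, 0] + 0.5 * a + rng.normal(size=n) > 0.7).astype(int)

def dp_gap(y_pred, group):
    """Demographic parity gap: |P(yhat=1 | a=0) - P(yhat=1 | a=1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def evaluate(model, x, y, group):
    """Return (DP gap, balanced accuracy) of a fitted model on the full data."""
    pred = model.predict(x)
    return dp_gap(pred, group), balanced_accuracy_score(y, pred)

# Baseline model trained on all samples.
model = LogisticRegression(max_iter=1000).fit(x, y)
gap, bacc = evaluate(model, x, y, a)
print(f"before dropping: DP gap={gap:.3f}, balanced acc={bacc:.3f}")

# Model-guided dropping heuristic (an assumption, for illustration only):
# in the group the model favours, drop the positively labeled samples the
# model is most confident about, then retrain on the remaining data.
proba = model.predict_proba(x)[:, 1]
favoured = 0 if proba[a == 0].mean() > proba[a == 1].mean() else 1
candidates = np.where((a == favoured) & (y == 1))[0]
n_drop = int(0.15 * len(candidates))
drop_idx = candidates[np.argsort(-proba[candidates])[:n_drop]]
keep = np.setdiff1d(np.arange(n), drop_idx)

debiased = LogisticRegression(max_iter=1000).fit(x[keep], y[keep])
gap2, bacc2 = evaluate(debiased, x, y, a)
print(f"after dropping:  DP gap={gap2:.3f}, balanced acc={bacc2:.3f}")
```

In this toy setup the drop fraction plays the role of a knob trading fairness against accuracy; the paper's actual method selects samples adaptively and comes with a theoretical guarantee on the demographic parity improvement versus the bounded loss in balanced accuracy, which this sketch does not reproduce.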