Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation

DOI: 10.1609/aaai.v35i9.16931 Publication Date: 2022-09-08T19:07:18Z
ABSTRACT
Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications. A naive solution is to transform the data so that it is statistically independent of group membership, but this may throw away too much information when a reasonable compromise between fairness and accuracy is desired. Another common approach is to limit the ability of a particular adversary who seeks to maximize parity. Unfortunately, representations produced by adversarial approaches may still retain biases, as their efficacy is tied to the complexity of the adversary used during training. To this end, we theoretically establish that, by limiting the mutual information between representations and protected attributes, we can assuredly control the parity of any downstream classifier. We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators and show that they outperform approaches that rely on variational bounds based on complex generative models. We test our approach on the UCI Adult and Heritage Health datasets and demonstrate that it provides more informative representations across a range of desired parity thresholds while providing strong theoretical guarantees on the parity of any downstream algorithm.
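
The guarantee stated above can be made concrete with a short information-theoretic argument. The derivation below is an illustrative sketch based on Pinsker's inequality, not necessarily the exact bound proved in the paper: for a representation z, a protected attribute c with group priors \pi_c = P(C = c), and any downstream classifier f(z) \in [0, 1], the demographic-parity gap is controlled by I(z; c):

    \[
    \Delta_{\mathrm{DP}}(f)
      = \bigl|\,\mathbb{E}[f(z) \mid c = 0] - \mathbb{E}[f(z) \mid c = 1]\,\bigr|
      \le \mathrm{TV}\bigl(p(z \mid 0),\, p(z \mid 1)\bigr)
      \le \sqrt{\tfrac{I(z;c)}{2\pi_0}} + \sqrt{\tfrac{I(z;c)}{2\pi_1}},
    \]

where the last step combines the triangle inequality for total variation, Pinsker's inequality, and the decomposition I(z;c) = \sum_c \pi_c \,\mathrm{KL}\bigl(p(z \mid c)\,\|\,p(z)\bigr). Driving I(z; c) below a chosen threshold therefore caps the parity gap of every classifier built on z.

The abstract also refers to contrastive estimation of this mutual information. The sketch below is a generic InfoNCE-style critic between representations and protected attributes, shown only to illustrate the contrastive machinery; it is not the authors' released implementation, the encoder, task_loss, and lam names are hypothetical, and InfoNCE yields a lower bound on I(z; c), whereas the paper's guarantee ultimately requires an upper bound.

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ContrastiveCritic(nn.Module):
        """Bilinear critic f(z, c) = proj(z)^T emb(c) scoring representation/attribute pairs."""
        def __init__(self, z_dim: int, num_groups: int, emb_dim: int = 16):
            super().__init__()
            self.group_emb = nn.Embedding(num_groups, emb_dim)
            self.proj = nn.Linear(z_dim, emb_dim, bias=False)

        def forward(self, z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
            # Pairwise scores for every (z_i, c_j) in the batch: shape (B, B).
            return self.proj(z) @ self.group_emb(c).t()

    def infonce_estimate(critic: ContrastiveCritic,
                         z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # InfoNCE: I(z; c) >= log B - cross-entropy of picking the matched pair
        # (the diagonal) against in-batch negatives drawn from the marginal of c.
        scores = critic(z, c)
        labels = torch.arange(z.size(0), device=z.device)
        return math.log(z.size(0)) - F.cross_entropy(scores, labels)

    # Usage sketch (hypothetical names): weight the estimated dependence between
    # z and c against a task objective, so that pushing I(z; c) down trades
    # predictive accuracy for parity.
    # z = encoder(x)
    # loss = task_loss(z, y) + lam * infonce_estimate(critic, z, c)
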