Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly
Subjects: Machine Learning (cs.LG); Computers and Society (cs.CY)
ACM classes: I.2; I.2.6; I.2.0; K.4.2
MSC classes: 68T01, 68T09, 68T20
FOS: Computer and information sciences
DOI:
10.48550/arXiv.2405.09251
Publication Date:
2024-05-15
AUTHORS (2)
ABSTRACT
With various machine learning (ML) applications deployed in the real world, concerns about discrimination hidden in ML models are growing, particularly in high-stakes domains. Existing techniques for assessing the level of discrimination include commonly used group and individual fairness measures. However, these two types of measures are usually hard to reconcile with each other, and even two different group fairness measures may be incompatible. To address this issue, we investigate evaluating the discrimination level of classifiers from a manifold perspective and propose a "harmonic fairness measure via manifolds (HFM)" based on distances between sets. Yet the direct calculation of these distances may be too expensive to afford, reducing its practical applicability. We therefore devise an approximation algorithm named "Approximation of distance between sets (ApproxDist)" to facilitate accurate estimation of the distances, and we further demonstrate its algorithmic effectiveness under certain reasonable assumptions. Empirical results indicate that the proposed HFM is valid and that ApproxDist is both effective and efficient.
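The abstract names two components: a fairness measure built on a distance between sets of instances (HFM), and a cheaper estimator of that distance (ApproxDist). As a rough illustration only, the sketch below compares two groups of instances with a symmetric average nearest-neighbour set distance and estimates it by subsampling. The distance definition, the subsampling scheme, and the names set_distance and approx_set_distance are all illustrative assumptions, not the paper's actual HFM or ApproxDist.

```python
import numpy as np

def set_distance(X_a, X_b):
    """Illustrative distance between two sets of instances.

    Assumption: a symmetric average of nearest-neighbour Euclidean
    distances; a stand-in for the paper's (unspecified here) set
    distance, with full O(|X_a| * |X_b|) pairwise cost.
    """
    # Pairwise Euclidean distances, shape (|X_a|, |X_b|)
    d = np.linalg.norm(X_a[:, None, :] - X_b[None, :, :], axis=-1)
    # Average the closest-match distance in both directions
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def approx_set_distance(X_a, X_b, n_samples=64, seed=None):
    """Naive subsampling estimator of set_distance.

    Assumption: random subsampling to dodge the full pairwise
    computation; the paper's ApproxDist uses its own estimation
    scheme, which is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    sub_a = X_a[rng.choice(len(X_a), size=min(n_samples, len(X_a)), replace=False)]
    sub_b = X_b[rng.choice(len(X_b), size=min(n_samples, len(X_b)), replace=False)]
    return set_distance(sub_a, sub_b)

# Usage sketch: split synthetic data by a binary sensitive attribute
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))         # feature matrix
s = rng.integers(0, 2, size=1000)      # hypothetical sensitive attribute
print(set_distance(X[s == 0], X[s == 1]))          # exact (quadratic cost)
print(approx_set_distance(X[s == 0], X[s == 1]))   # cheap estimate
```

The point of the sketch is the cost trade-off the abstract alludes to: the exact set distance scales with the product of the group sizes, while a sampling-based estimate keeps the cost roughly constant at the price of estimation error.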