On the Relationship Between Explanations, Fairness Perceptions, and Decisions

DOI: 10.48550/arxiv.2204.13156
Publication Date: 2022-01-01
ABSTRACT
It is known that recommendations of AI-based systems can be incorrect or unfair. Hence, it is often proposed that a human be the final decision-maker. Prior work has argued that explanations are an essential pathway to help human decision-makers enhance decision quality and mitigate bias, i.e., to facilitate human-AI complementarity. For these benefits to materialize, explanations should enable humans to appropriately rely on AI recommendations and to override the algorithmic recommendation when necessary to increase the distributive fairness of decisions. The literature, however, does not provide conclusive empirical evidence as to whether explanations enable such complementarity in practice. In this work, we (a) provide a conceptual framework to articulate the relationships between explanations, fairness perceptions, reliance, and distributive fairness, (b) apply it to understand (seemingly) contradictory research findings at the intersection of explanations and fairness, and (c) derive cohesive implications for the formulation of research questions and the design of experiments.

ACM CHI 2022 Workshop on Human-Centered Explainable AI (HCXAI), May 12-13, 2022, New Orleans, LA, USA