Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making
KEYWORDS
algorithmic decision-making; fairness; perceptions; explanations; study; Accountability, Evaluation, and Obscurity of AI Algorithms

SUBJECT CLASSIFICATIONS
FOS: Computer and information sciences
Computer Science - Artificial Intelligence (cs.AI)
Computer Science - Human-Computer Interaction (cs.HC)
DDC 330 (Economics); DDC 006
05 social sciences; 0502 economics and business
DOI: 10.5445/ir/1000144756
Publication Date: 2022-01-01
AUTHORS (3)
ABSTRACT
Automated decision systems (ADS) have become ubiquitous in many high-stakes domains. These systems typically rely on sophisticated yet opaque artificial intelligence (AI) techniques that seldom allow for full comprehension of their inner workings, particularly by affected individuals. As a result, ADS are prone to deficient oversight and calibration, which can lead to undesirable (e.g., unfair) outcomes. In this work, we conduct an online study with 200 participants to examine people's perceptions of fairness and trustworthiness towards ADS, compared to a scenario where a human instead of an ADS makes a high-stakes decision, and we provide identical, thorough explanations for the decisions in both cases. Surprisingly, we find that people perceive ADS as fairer than human decision-makers. Our analyses also suggest that people's AI literacy affects their perceptions: people with higher AI literacy favor ADS more strongly over human decision-makers, whereas people with low AI literacy show no significant differences in their perceptions.

Hawaii International Conference on System Sciences 2022 (HICSS-55)