A Study on Fairness and Trust Perceptions in Automated Decision Making
Keywords: Fairness, Trust, Transparency, Explanation, Machine Learning, Artificial Intelligence, Automated Decision Making
Subjects: Computer Science - Artificial Intelligence (cs.AI); Computer Science - Human-Computer Interaction (cs.HC); Economics (DDC 330)
FOS: Computer and information sciences
DOI: 10.5445/ir/1000130551
Publication Date: 2021-01-01
AUTHORS (3)
ABSTRACT
Automated decision systems are increasingly used for consequential decision making, for a variety of reasons. These systems often rely on sophisticated yet opaque models, which allow little or no insight into how or why a given decision was reached. This is not only problematic from a legal perspective; non-transparent systems are also prone to yield undesirable (e.g., unfair) outcomes because their soundness is difficult to assess and calibrate in the first place. In this work, we conduct a study to evaluate different approaches to explaining such systems with respect to their effect on people's perceptions of the fairness and trustworthiness of the underlying mechanisms. A pilot study revealed surprising qualitative insights as well as preliminary significant effects, which will have to be verified, extended, and thoroughly discussed in the larger main study.

Joint Proceedings of the ACM IUI 2021 Workshops, April 13--17, 2021, College Station, USA