Explainable Artificial Intelligence improves human decision-making: Results from a mushroom picking experiment at a public art festival

Keywords: conceptual replication, AI, visual explanation, mushroom identification, trust calibration
DOI: 10.31219/osf.io/68emr
Publication Date: 2022-09-22
ABSTRACT
Explainable Artificial Intelligence (XAI) enables an Artificial Intelligence (AI) to explain its decisions. This holds the promise of making AI more understandable to users, improving interaction, and establishing an adequate level of trust. We tested this assertion in the high-risk task of mushroom hunting, where users have to decide whether a mushroom is edible or poisonous with the aid of an AI-based app that suggests classifications based on mushroom images. In a between-subjects experiment, N = 328 visitors to an Austrian media art exhibition played a mushroom hunting game on tablet computers while walking through a highly immersive artificial indoor forest. One group saw only the AI's decisions, while a second group additionally received attribution-based and example-based visual explanations of the AI's recommendations. Participants who received visual explanations outperformed participants without explanations in correct edibility assessments and pick-up decisions. This exhibition-based experiment thus replicated the decision-making results of a previous online study. Unlike in the previous study, however, the visual explanations did not affect trust or acceptance measures. Comparing the two studies directly, we discuss the findings in terms of generalizability. Beyond the scientific contribution, we discuss how conducting XAI experiments in highly immersive, art- and game-based exhibition environments affects visitors and local communities by triggering reflection on, and awareness of, psychological issues in human-AI interaction.
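The abstract does not describe how the attribution-based explanations were produced, and no code accompanies this page. For readers unfamiliar with the technique, the sketch below shows one common variant (vanilla gradient saliency) applied to an image classifier. The pretrained ResNet-18 stand-in, the file name mushroom.jpg, and the choice of saliency method are all illustrative assumptions, not the authors' implementation.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Assumption: a generic pretrained ImageNet classifier stands in for the
# app's mushroom model, whose architecture the paper does not disclose.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing for the stand-in model.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def saliency_map(image: Image.Image) -> torch.Tensor:
    """Return a 224x224 per-pixel attribution map for the top predicted class."""
    x = preprocess(image).unsqueeze(0)   # shape (1, 3, 224, 224)
    x.requires_grad_(True)               # track gradients w.r.t. input pixels
    scores = model(x)                    # class logits, shape (1, 1000)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()      # d(top-class logit) / d(pixels)
    # Attribution per pixel: largest absolute gradient across colour channels.
    return x.grad.abs().max(dim=1).values.squeeze(0)

if __name__ == "__main__":
    # "mushroom.jpg" is a hypothetical input image, not from the study.
    heatmap = saliency_map(Image.open("mushroom.jpg").convert("RGB"))
    print(heatmap.shape)  # torch.Size([224, 224])
```

Overlaying such a heatmap on the photograph highlights the regions that most influenced the classification, which is the kind of attribution-based visual explanation the second experimental group received alongside example-based explanations.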