Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users’ Information Processing

Keywords: Unintended consequences; Spillover effect; Affect; Black box
DOI: 10.1287/isre.2023.1199
Publication Date: 2023-03-03
ABSTRACT
Although future regulations increasingly advocate that AI applications must be interpretable by users, we know little about how such explainability can affect human information processing. By conducting two experimental studies, we help to fill this gap. We show that explanations pave the way for AI systems to reshape users' understanding of the world around them. Specifically, state-of-the-art explainability methods evoke mental model adjustments that are subject to confirmation bias, allowing misconceptions and errors to persist and even accumulate. Moreover, explanations create spillover effects that alter user behavior in related but distinct domains where users do not have access to an AI system. These spillover effects risk manipulating user behavior, promoting discriminatory biases, and biasing decision making. The reported findings serve as a warning that the indiscriminate use of modern explainability methods as an isolated measure to address AI systems' black-box problem can lead to unintended and unforeseen consequences, because explainability creates a new channel through which AI systems can influence user behavior in various domains.