How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair

Subjects: Consumer Decision Making; Economics and Business; Social Sciences; Social and Personality Psychology; Psychology and Cognitive Sciences; Psychology, Other; Attitudes and Persuasion; Engineering Psychology; Social and Behavioral Sciences; Consumer Psychology
DOI: 10.31234/osf.io/234f5 Publication Date: 2020-10-12T06:41:59Z
ABSTRACT
Trust is essential to how individuals perceive, behave toward, and evaluate intelligent agents; indeed, it is a primary motive for accepting new technology. It is therefore crucial to repair trust when it is damaged. This study investigated how intelligent agents should apologize to recover trust, and whether the effectiveness of an apology differs between human-like and machine-like agents, drawing on two seemingly competing frameworks: the CASA (Computers-Are-Social-Actors) paradigm and automation bias. A 2 (agent: human-like vs. machine-like) × 2 (apology attribution: internal vs. external) between-subjects experiment was conducted (N = 193) in the context of the stock market. Participants were presented with a scenario in which they made investment choices with the help of an artificial intelligence agent's advice. To trace the trajectory of initial trust building, trust violation, and trust repair, we designed an investment game consisting of 5 rounds of 8 investment choices (40 choices in total). The results show that trust was repaired more effectively when a human-like agent apologized with internal rather than external attribution. The opposite pattern was observed among participants with machine-like agents: external attribution produced better trust repair than internal attribution. Both theoretical and practical implications are discussed.