Communicating intent to develop shared situation awareness and engender trust in human-agent teams

DOI: 10.1016/j.cogsys.2017.02.002
Publication date: 2017-02-22
ABSTRACT
This paper addresses issues related to integrating autonomy-enabled, intelligent agents into collaborative human-machine teams. Interaction with intelligent machine agents capable of making independent, goal-directed decisions constitutes a major change from traditional human-machine interaction based on teleoperation. As independent machine decisions become subject to human trust and mental models, communicating the machine agent's intent to its human counterparts becomes increasingly important. The authors present findings from their research suggesting that existing user display technologies, tailored with context-specific information and matched to the human's knowledge of the machine agent's decision process, can mitigate misperceptions of the appropriateness of agent behavioral responses. This matters because misperceptions on the part of human team members increase the likelihood of trust degradation and unnecessary interventions, ultimately leading to disuse of the agent. Examples of possible issues associated with communicating agent intent, as well as potential implications for trust calibration, are provided.
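To make the abstract's central claim concrete, the following is a minimal, hypothetical sketch (not from the paper) of an intent display that tailors context-specific information to the operator's knowledge of the agent's decision process. All names (IntentMessage, KnowledgeLevel, render_for_operator) and the example values are invented for illustration; the paper itself does not prescribe an implementation.

# Hypothetical sketch: an agent "intent message" whose displayed detail
# is tailored to the operator's knowledge of the agent's decision process.
# Every identifier and value here is an assumption, not from the paper.

from dataclasses import dataclass, field
from enum import Enum


class KnowledgeLevel(Enum):
    """How familiar the human teammate is with the agent's decision process."""
    NOVICE = 1
    INTERMEDIATE = 2
    EXPERT = 3


@dataclass
class IntentMessage:
    """What the agent intends to do, why, and the context behind the choice."""
    action: str                                   # the behavior the agent has chosen
    goal: str                                     # the higher-level goal the action serves
    rationale: str                                # why this action was selected
    context: dict = field(default_factory=dict)   # context-specific decision inputs


def render_for_operator(msg: IntentMessage, level: KnowledgeLevel) -> str:
    """Tailor the displayed intent to the operator's knowledge level.

    Novices see goal-level intent only; intermediates also see the rationale;
    experts additionally see the raw contextual inputs behind the decision.
    """
    lines = [f"Action: {msg.action}", f"Goal: {msg.goal}"]
    if level in (KnowledgeLevel.INTERMEDIATE, KnowledgeLevel.EXPERT):
        lines.append(f"Why: {msg.rationale}")
    if level is KnowledgeLevel.EXPERT:
        for key, value in msg.context.items():
            lines.append(f"  context[{key}] = {value}")
    return "\n".join(lines)


if __name__ == "__main__":
    msg = IntentMessage(
        action="reroute around grid sector B4",
        goal="reach rally point by 14:00",
        rationale="obstacle density in B4 exceeds traversability threshold",
        context={"obstacle_density": 0.82, "threshold": 0.60},
    )
    for level in KnowledgeLevel:
        print(f"--- {level.name} display ---")
        print(render_for_operator(msg, level))

The design choice the sketch illustrates is the one the abstract argues for: exposing the agent's rationale and decision context, scaled to what the operator can interpret, so that behavior which would otherwise look inappropriate is seen as goal-directed rather than erratic.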