Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation

KEYWORDS: Supervisor; Mental Model
DOI: 10.1609/icaps.v28i1.13930
Publication Date: 2022-09-27
ABSTRACT
Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem, by framing explanations in terms of these model differences. However, the human's mental model (and hence the model difference) is often not known precisely, and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information and how these redundancies can be reduced using conditional explanations that iterate with the human to attain common ground. Finally, we introduce an anytime version of this approach and empirically demonstrate the trade-offs involved in these forms of explanation in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains, as well as in a demonstration on a robot in a typical search and reconnaissance scenario with an external supervisor.