Contrastive explanation: a structural-model approach

Causal model
DOI: 10.1017/s0269888921000102
Publication Date: 2021-10-20T22:43:35Z
ABSTRACT
This paper presents a model of contrastive explanation using structural causal models. The topic of causal explanation in artificial intelligence has gathered interest in recent years as researchers and practitioners aim to increase trust in and understanding of intelligent decision-making. While different sub-fields have looked into this problem with a sub-field-specific view, there are few models that capture explanation more generally. One general model is based on structural causal models. It defines an explanation as a fact that, if found to be true, would constitute an actual cause of a specific event. However, research in philosophy and the social sciences shows that explanations are contrastive: that is, when people ask for an explanation of an event -- the fact -- they are (sometimes implicitly) asking for an explanation relative to some contrast case: "Why P rather than Q?". In this paper, we extend the structural causal model approach to define two complementary notions of contrastive explanation, and demonstrate them on two classical problems in artificial intelligence: classification and planning. We believe this model can help researchers in sub-fields of artificial intelligence to better understand contrastive explanation.
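To make the abstract's idea concrete, the following is a minimal illustrative sketch, not the paper's formalism: a toy structural causal model written as Python functions (the variable names `rain`, `sprinkler`, and `wet` are invented for this example), with a naive contrastive query "Why P rather than Q?" answered by searching for interventions on the input variables that would have produced the contrast case Q instead of the observed fact P.

```python
from itertools import product

def scm(rain, sprinkler):
    """Toy structural equations: the grass is wet if it rained
    or the sprinkler was on. Returns the full variable assignment."""
    wet = rain or sprinkler
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

def contrastive_explanations(observed, contrast_var, contrast_value):
    """Enumerate interventions on the input variables that would
    have made `contrast_var` take `contrast_value` (the contrast Q),
    reported as the settings that differ from the observed world."""
    explanations = []
    for rain, sprinkler in product([False, True], repeat=2):
        world = scm(rain, sprinkler)
        if world[contrast_var] == contrast_value:
            diff = {k: world[k] for k in ("rain", "sprinkler")
                    if observed[k] != world[k]}
            if diff:  # only genuine interventions, not the observed world
                explanations.append(diff)
    return explanations

# Observed fact P: it rained, the sprinkler was off, the grass is wet.
observed = scm(rain=True, sprinkler=False)
# Contrastive question: why is the grass wet rather than dry?
print(contrastive_explanations(observed, "wet", False))
# → [{'rain': False}]  (had it not rained, the grass would be dry)
```

The brute-force enumeration stands in for the paper's formal definitions; it only serves to show that a contrastive answer points at the difference between the actual world and a counterfactual world realising the contrast case.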