Shapley value: from cooperative game to explainable artificial intelligence
Concepts: Shapley Value; Value (mathematics)
DOI: 10.1007/s43684-023-00060-8
Publication Date: 2024-02-09
ABSTRACT
With the tremendous success of machine learning (ML), concerns about the black-box nature of ML models have grown. The issue of interpretability affects trust in ML systems and raises ethical concerns such as algorithmic bias. In recent years, the feature attribution explanation method based on the Shapley value has become the mainstream explainable artificial intelligence approach for explaining ML models. This paper provides a comprehensive overview of Shapley value-based attribution methods. We begin by outlining the foundational theory of the Shapley value, rooted in cooperative game theory, and discussing its desirable properties. To enhance comprehension and aid in identifying relevant algorithms, we propose a classification framework for existing methods along three dimensions: Shapley value type, feature replacement method, and approximation method. Furthermore, we emphasize the practical application of the Shapley value at different stages of model development, encompassing the pre-modeling, modeling, and post-modeling phases. Finally, this work summarizes the limitations associated with the Shapley value and discusses potential directions for future research.
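The paper itself provides no code; as a rough illustration of the cooperative-game formulation and the approximation dimension mentioned in the abstract, the sketch below estimates Shapley values $\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\big(v(S \cup \{i\}) - v(S)\big)$ by permutation sampling, a common Monte Carlo approximation. The function names and the toy payoff function are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def shapley_permutation_sampling(value_fn, n_features, n_samples=2000, rng=None):
    """Estimate Shapley values by averaging marginal contributions over random orderings.

    value_fn: callable mapping a frozenset of feature indices to a real payoff
              (e.g., a model's prediction restricted to those features).
    Returns an array of estimated Shapley values, one per feature.
    """
    rng = np.random.default_rng(rng)
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        order = rng.permutation(n_features)
        coalition = set()
        prev_value = value_fn(frozenset(coalition))
        for i in order:
            coalition.add(i)
            new_value = value_fn(frozenset(coalition))
            phi[i] += new_value - prev_value  # marginal contribution of feature i
            prev_value = new_value
    return phi / n_samples

# Toy cooperative game: additive payoffs plus an interaction between features 0 and 1.
weights = np.array([1.0, 2.0, 0.5])

def toy_value(S):
    base = sum(weights[i] for i in S)
    bonus = 0.5 if {0, 1} <= set(S) else 0.0
    return base + bonus

# Expected exact values: [1.25, 2.25, 0.5]; the estimate converges as n_samples grows.
print(shapley_permutation_sampling(toy_value, n_features=3, rng=0))
```

Exact computation enumerates all $2^n$ coalitions, which is intractable for many features; sampling orderings trades exactness for a controllable number of payoff evaluations.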