Data-centric Prediction Explanation via Kernelized Stein Discrepancy

FOS: Computer and information sciences · Computer Vision and Pattern Recognition (cs.CV) · Machine Learning (cs.LG)
DOI: 10.48550/arxiv.2403.15576 Publication Date: 2024-03-22
ABSTRACT
Existing example-based prediction explanation methods often bridge test and training data points through the model's parameters or latent representations. While these offer clues to the causes of model predictions, they exhibit innate shortcomings, such as incurring significant computational overhead or producing coarse-grained explanations. This paper presents Highly-precise Data-centric Explanation (HD-Explain), a straightforward prediction explanation method that exploits properties of Kernelized Stein Discrepancy (KSD). Specifically, the KSD uniquely defines a parameterized kernel function for a trained model that encodes model-dependent data correlation. By leveraging this kernel function, one can efficiently identify the training samples that provide the best predictive support to a test point. We conducted thorough analyses and experiments across multiple classification domains, where we show that HD-Explain outperforms existing methods from various aspects, including 1) preciseness (fine-grained explanations), 2) consistency, and 3) computational efficiency, leading to a surprisingly simple, effective, and robust prediction explanation solution.
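To make the kernel idea in the abstract concrete, the sketch below computes the standard Stein kernel (RBF base kernel) between a test point and a set of training points, then ranks training points by kernel value as candidate "supporting" examples. This is only a toy illustration, not the paper's HD-Explain implementation: the score function (here, a standard Gaussian model with score(x) = -x), the RBF base kernel, and the bandwidth `h` are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def rbf(x, y, h):
    """RBF base kernel k(x, y) = exp(-||x - y||^2 / (2 h^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * h ** 2))

def stein_kernel(x, y, score, h=1.0):
    """Stein kernel u_p(x, y) built from an RBF base kernel and a
    score function s(x) = grad_x log p(x). All four terms of the
    standard closed form are computed explicitly."""
    d = x.shape[0]
    k = rbf(x, y, h)
    diff = x - y
    grad_x_k = -diff / h ** 2 * k          # gradient of k w.r.t. x
    grad_y_k = diff / h ** 2 * k           # gradient of k w.r.t. y
    trace_term = k * (d / h ** 2 - np.sum(diff ** 2) / h ** 4)
    sx, sy = score(x), score(y)
    return sx @ sy * k + sx @ grad_y_k + sy @ grad_x_k + trace_term

# Toy model: standard normal, so score(x) = -x (an illustrative assumption).
score = lambda x: -x
rng = np.random.default_rng(0)
train = rng.normal(size=(50, 2))           # hypothetical training set
test = np.array([0.5, -0.2])               # hypothetical test point

# Rank training points by their Stein-kernel value against the test point;
# the highest-scoring points act as its strongest "predictive support".
vals = np.array([stein_kernel(test, z, score) for z in train])
top5 = np.argsort(vals)[::-1][:5]
```

In HD-Explain the kernel is parameterized by the trained model rather than a hand-picked density, but the ranking step is analogous: the explanation for a prediction is the set of training points with the largest kernel values against the test point.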