Towards a Deeper Understanding of Concept Bottleneck Models Through End-to-End Explanation
DOI: 10.48550/arxiv.2302.03578
Publication Date: 2023-01-01
AUTHORS (4)
ABSTRACT
Concept Bottleneck Models (CBMs) first map raw input(s) to a vector of human-defined concepts, before using this vector to predict a final classification. We might therefore expect CBMs to be capable of predicting concepts based on distinct regions of an input. In doing so, this would support human interpretation when generating explanations of the model's outputs, visualising the input features that correspond to concepts. The contribution of this paper is threefold: Firstly, we expand on existing literature by looking at relevance both from the input to the concept vector, confirming that relevance is distributed among input features, and from the concept vector to the final classification, where, for the most part, the classification is made using concepts predicted as present. Secondly, we report a quantitative evaluation measuring the distance between the maximum input feature relevance and the ground truth location; we perform this with three techniques, Layer-wise Relevance Propagation (LRP), Integrated Gradients (IG) and a baseline gradient approach, finding that LRP has a lower average distance than IG. Thirdly, we propose the proportion of relevance as a measurement for explaining concept importance.
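To make the two-stage structure concrete, the sketch below shows a minimal CBM forward pass in PyTorch. The class and attribute names (`ConceptBottleneckModel`, `concept_net`, `label_net`) are illustrative assumptions rather than the paper's implementation; the key property is that the final classifier sees only the concept vector.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Minimal CBM sketch: raw input -> concept vector -> class logits."""

    def __init__(self, concept_net: nn.Module, n_concepts: int, n_classes: int):
        super().__init__()
        self.concept_net = concept_net                     # maps raw input to concept logits
        self.label_net = nn.Linear(n_concepts, n_classes)  # predicts the class from concepts only

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_net(x)       # one logit per human-defined concept
        concepts = torch.sigmoid(concept_logits)   # concept "presence" scores in [0, 1]
        class_logits = self.label_net(concepts)    # bottleneck: classification uses concepts alone
        return concepts, class_logits
```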
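Of the three attribution techniques compared, Integrated Gradients is the most compact to state. Below is a minimal sketch assuming a `model_fn` that maps a batch of inputs to a batch of logits; the zero-indexed target, the straight-line path, and the 50-step Riemann approximation are common defaults, not values taken from the paper.

```python
import torch

def integrated_gradients(model_fn, x, baseline, target, steps=50):
    """Approximate Integrated Gradients (Sundararajan et al., 2017) for one output unit.

    x, baseline: single input and reference input of identical shape (neither requiring grad).
    target: index of the concept (or class) logit to attribute.
    """
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)    # straight-line path from baseline to input
    path.requires_grad_(True)
    model_fn(path)[:, target].sum().backward()   # gradients of the target logit along the path
    avg_grad = path.grad.mean(dim=0)             # average gradient over the path samples
    return (x - baseline) * avg_grad             # per-feature attribution
```

With the CBM sketch above, concept attributions could be obtained by passing, for example, `model_fn=lambda z: model(z)[0]` so that gradients flow to the concept logits.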
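The two evaluation quantities from the second and third contributions reduce to short array computations once a relevance map is available. The sketch below assumes 2-D relevance maps, a single (row, col) ground-truth coordinate, and a binary region mask; the function names and the positive-relevance convention are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def max_relevance_distance(relevance_map: np.ndarray, gt_location) -> float:
    """Euclidean distance from the most relevant input feature to the ground-truth location."""
    peak = np.unravel_index(np.argmax(relevance_map), relevance_map.shape)
    return float(np.linalg.norm(np.asarray(peak, float) - np.asarray(gt_location, float)))

def relevance_proportion(relevance_map: np.ndarray, region_mask: np.ndarray) -> float:
    """Share of total positive relevance falling inside a region of interest."""
    positive = np.clip(relevance_map, 0.0, None)   # discard negative evidence
    total = positive.sum()
    return float(positive[region_mask.astype(bool)].sum() / total) if total > 0.0 else 0.0
```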