Quantitative evaluation of explainable graph neural networks for molecular property prediction
Keywords: Interpretability, Complement
DOI: 10.1016/j.patter.2022.100628
Publication Date: 2022-11-10
AUTHORS (4)
ABSTRACT
Graph neural networks (GNNs) have received increasing attention because of their expressive power on topological data, but they are still criticized for their lack of interpretability. To interpret GNN models, explainable artificial intelligence (XAI) methods have been developed. However, these methods are limited to qualitative analyses without quantitative assessments on real-world datasets, due to a lack of ground truths. In this study, we established five XAI-specific molecular property benchmarks, including two synthetic and three experimental datasets. Through these datasets, we quantitatively assessed six XAI methods on four GNN models and made comparisons with seven medicinal chemists of different experience levels. The results demonstrated that XAI methods could deliver reliable and informative answers in identifying key substructures. Moreover, the identified substructures were shown to complement existing classical fingerprints to improve molecular property predictions, and the improvements increased with the growth of training data.
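The abstract's final claim — that XAI-identified substructures can complement classical fingerprints — can be illustrated with a minimal sketch. The pattern sets, function names, and the use of plain substring matching on SMILES strings below are all illustrative assumptions, not the paper's method (real pipelines would use SMARTS matching via a cheminformatics library such as RDKit):

```python
# Hedged sketch: augmenting a "classical" fingerprint bit vector with
# indicator bits for substructures flagged by an XAI method.
# Substring checks on SMILES stand in for proper substructure matching;
# all pattern lists and names here are hypothetical.

def fingerprint_bits(smiles: str, patterns: list[str]) -> list[int]:
    """Return 1 per pattern if it occurs in the SMILES string, else 0."""
    return [int(p in smiles) for p in patterns]

# Toy "classical" pattern set and an XAI-flagged key substructure.
CLASSICAL = ["C(=O)O", "N", "c1ccccc1"]   # e.g. carboxyl, amine, benzene ring
XAI_FOUND = ["C(=O)O"]                     # substructure flagged by an explainer

def augmented_fingerprint(smiles: str) -> list[int]:
    # Concatenate classical bits with XAI-derived indicator bits.
    return fingerprint_bits(smiles, CLASSICAL) + fingerprint_bits(smiles, XAI_FOUND)

print(augmented_fingerprint("CC(=O)O"))    # acetic acid  -> [1, 0, 0, 1]
print(augmented_fingerprint("Nc1ccccc1"))  # aniline      -> [0, 1, 1, 0]
```

The appended bits give a downstream predictor direct access to the explainer-identified features, which is one plausible reading of how the reported improvements arise.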
REFERENCES (54)
CITATIONS (33)