Prediction rigidities for data-driven chemistry

Keywords: Interpretability · Transferability · Leverage (statistics) · Robustness · Training set · Formalism · Experimental data
DOI: 10.1039/d4fd00101j Publication Date: 2024-08-23T14:21:26Z
ABSTRACT
The widespread application of machine learning (ML) to the chemical sciences is making it very important to understand how ML models learn to correlate structures with their properties, and what can be done to improve training efficiency whilst guaranteeing interpretability and transferability. In this work, we demonstrate the wide utility of prediction rigidities, a family of metrics derived from the loss function, in understanding the robustness of model predictions. We show that the prediction rigidities allow the assessment of the model not only at the global level, but also on a local or component-wise level, at which the intermediate (e.g. atomic, body-ordered, or range-separated) predictions are made. We leverage these metrics to understand the learning behavior of different models, and to guide efficient dataset construction for training. We finally implement the formalism for a model targeting a coarse-grained system, demonstrating applicability to an even broader class of atomistic modeling problems.
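To make the idea concrete, here is a minimal illustrative sketch for the simplest case of a regularized linear model, where the loss Hessian in the weights is H = XᵀX + λI and a prediction rigidity at a test input x* can be written as 1 / (x*ᵀ H⁻¹ x*), i.e. the inverse of a leverage-like quadratic form (hence the "Leverage (statistics)" keyword). The function name, the toy data, and the ridge parameter are assumptions for illustration, not the paper's implementation; the paper's formalism covers far more general (e.g. atomistic, component-wise) models.

```python
import numpy as np

def prediction_rigidity(X, x_star, lam=1e-6):
    """Prediction rigidity of a ridge-regularized linear model at x_star.

    Illustrative form only: PR = 1 / (x*^T H^{-1} x*), with
    H = X^T X + lam * I the Hessian of the regularized squared loss.
    A larger PR means the training set constrains the prediction
    at x_star more tightly (lower sensitivity to refitting).
    """
    H = X.T @ X + lam * np.eye(X.shape[1])
    return 1.0 / float(x_star @ np.linalg.solve(H, x_star))

# Toy training set (hypothetical data): a test point inside the bulk of
# the training distribution should be more rigid than one far outside it.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
pr_inside = prediction_rigidity(X, X.mean(axis=0))
pr_outside = prediction_rigidity(X, 10.0 * np.ones(3))
print(pr_inside > pr_outside)  # interpolative prediction is more rigid
```

This is the same mechanism the abstract points to for dataset construction: points with low rigidity are exactly those the current training set constrains poorly, so they are natural candidates to add next.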