Interpretation of multi-task clearance models from molecular images supported by experimental design
Interpretability
Robustness
DOI:
10.1016/j.ailsci.2022.100048
Publication Date:
2022-12-02T03:12:11Z
AUTHORS (4)
ABSTRACT
Recent methodological advances in deep learning (DL) architectures have not only improved the performance of predictive models but also enhanced their interpretability potential, thus considerably increasing transparency. In the context of medicinal chemistry, models with the potential to accurately predict molecular properties and to interpret them chemically would be strongly preferred. Previously, we developed accurate multi-task convolutional neural network (CNN) and graph convolutional neural network (GCNN) models for a set of diverse intrinsic metabolic clearance parameters from image- and graph-based molecular representations, respectively. Herein, we introduce several model interpretation frameworks to answer whether the explanations obtained from the CNN and GCNN models could be applied to chemical transformations associated with experimentally confirmed products. We show a strong correlation between pixel intensities and the corresponding predictions, as well as robustness to different molecular orientations. Using actual case examples, we demonstrate that both interpretations frequently complement each other, suggesting high potential for their combined use in guiding chemistry design.
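The abstract refers to pixel-level explanations of a CNN trained on molecular images and to checking robustness of those explanations across image orientations. The snippet below is only a minimal, hypothetical sketch of that general idea, not the authors' published code: a gradient-based saliency map computed from a stand-in multi-task CNN. TinyClearanceCNN, saliency_map, and the example SMILES are illustrative assumptions.

# Minimal, hypothetical sketch of gradient-based pixel attribution for a
# CNN clearance model on a molecular image, with a simple orientation check.
# TinyClearanceCNN is an illustrative stand-in, not the published model.
import torch
import torch.nn as nn
from torchvision.transforms.functional import to_tensor
from rdkit import Chem
from rdkit.Chem import Draw


class TinyClearanceCNN(nn.Module):
    """Hypothetical multi-task CNN: one output per clearance parameter."""
    def __init__(self, n_tasks: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
        )
        self.head = nn.Linear(16 * 8 * 8, n_tasks)

    def forward(self, x):
        return self.head(self.features(x))


def saliency_map(model, image_tensor, task=0):
    """Absolute input gradient of one task output with respect to the pixels."""
    x = image_tensor.detach().unsqueeze(0).requires_grad_(True)
    model(x)[0, task].backward()
    return x.grad[0].abs().sum(dim=0)  # collapse RGB channels -> H x W map


# Render a molecule and compute saliency for two orientations; a high
# correlation between the (re-aligned) maps would indicate orientation robustness.
mol = Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1")  # example molecule (paracetamol)
image = Draw.MolToImage(mol, size=(224, 224)).convert("RGB")
model = TinyClearanceCNN().eval()

map_original = saliency_map(model, to_tensor(image))
map_rotated = saliency_map(model, to_tensor(image.rotate(90)))
print(map_original.shape, map_rotated.shape)  # torch.Size([224, 224]) each

In the paper itself, such attribution maps are compared against chemical transformations with experimentally confirmed products; the untrained stand-in model above only illustrates the mechanics of computing and comparing pixel attributions.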