GDRL: An interpretable framework for thoracic pathologic prediction
Interpretability
Representation
Feature Learning
DOI: 10.1016/j.patrec.2022.12.020
Publication Date: 2022-12-24
ABSTRACT
Deep learning methods have shown strong performance in medical image analysis tasks. However, they generally act as "black boxes", offering no explanation of either the feature-extraction or the decision process, which leads to a lack of clinical insight and to high-risk assessments. To help deep learning envision diseases through visual clues, we propose a novel Group-Disentangled Representation Learning framework (GDRL). The key contribution is that GDRL fully disentangles the latent space into disease concepts with rich, non-overlapping feature-level explanations, thereby enhancing interpretability in both the feature-extraction and decision processes. Furthermore, we introduce an implicit group-swap structure that emphasizes the link between semantic disease concepts and low-level visual features, rather than relying on explicit explanations of general objects and their attributes. We demonstrate our framework on the prediction of four categories of disease from chest X-ray images. On ChestX-ray14, GDRL achieves AUROCs of 0.8630, 0.8980, 0.9269, and 0.8653 for the four thoracic pathologies, respectively, and we showcase the potential of the framework to explain the factors contributing to different diseases.
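To make the group-swap idea concrete, the following PyTorch sketch illustrates group-wise latent swapping in general terms; it is not the authors' implementation. The latent code is partitioned into K disease-concept groups, one group is exchanged between two images, and reconstruction from the swapped codes encourages each group to carry only its own concept. All names and sizes (GroupEncoder, GroupDecoder, swap_group, K, D) are hypothetical choices for illustration.

```python
# Minimal sketch of group-wise latent swapping for disentangled
# representation learning; module names and sizes are illustrative only.
import torch
import torch.nn as nn

K, D = 4, 32  # assumed: 4 disease-concept groups, 32-dim latent per group

class GroupEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, K * D),
        )

    def forward(self, x):
        # Latent codes shaped (batch, K, D): one group per disease concept.
        return self.backbone(x).view(-1, K, D)

class GroupDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(K * D, 64 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (64, 7, 7)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(z.flatten(1))

def swap_group(z_a, z_b, g):
    """Exchange latent group g between two samples; other groups stay put."""
    z_a2, z_b2 = z_a.clone(), z_b.clone()
    z_a2[:, g], z_b2[:, g] = z_b[:, g], z_a[:, g]
    return z_a2, z_b2

# Toy usage: if x_a and x_b share disease concept g, reconstructing both
# images from the swapped codes should still succeed, pushing concept g's
# information into its own latent group.
enc, dec = GroupEncoder(), GroupDecoder()
x_a, x_b = torch.randn(2, 1, 28, 28), torch.randn(2, 1, 28, 28)
z_a, z_b = enc(x_a), enc(x_b)
z_a_sw, z_b_sw = swap_group(z_a, z_b, g=2)
recon_loss = (nn.functional.mse_loss(dec(z_a_sw), x_a)
              + nn.functional.mse_loss(dec(z_b_sw), x_b))
```

In practice such a reconstruction term would be combined with per-group classification losses so that each latent group is both predictive of, and exclusively responsible for, one disease concept; the exact losses and architecture used by GDRL are described in the paper itself.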