- Explainable Artificial Intelligence (XAI)
- AI in Cancer Detection
- Advanced Neural Network Applications
- Radiomics and Machine Learning in Medical Imaging
- Machine Learning in Healthcare
- EEG and Brain-Computer Interfaces
- Artificial Intelligence in Healthcare and Education
- Digital Radiography and Breast Imaging
- Adversarial Robustness in Machine Learning
- Machine Learning and Data Classification
- Business Process Modeling and Analysis
- Problem and Project Based Learning
- Epilepsy Research and Treatment
- Cardiomyopathy and Myosin Studies
- Nutrition and Health in Aging
- Diabetes Management and Education
- Scientific Computing and Data Management
- Neonatal and Fetal Brain Pathology
- Global Cancer Incidence and Screening
- Neural Networks and Applications
- Pancreatic Function and Diabetes
- Reconstructive Surgery and Microvascular Techniques
- ECG Monitoring and Analysis
- Innovative Teaching and Learning Methods
- Cardiac Electrophysiology and Arrhythmias
Harvard University
2022-2024
Emory University
2024
Duke University
2018-2024
Beth Israel Deaconess Medical Center
2022
Massachusetts General Hospital
2022
When we are faced with challenging image classification tasks, we often explain our reasoning by dissecting the image and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture -- prototypical part network (ProtoPNet) -- that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification. The model thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, and others would...
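A minimal sketch of the prototype-based reasoning described above, in PyTorch; the backbone, layer sizes, and the `TinyProtoNet` name are illustrative assumptions, not the released ProtoPNet. Each prototype's evidence is the similarity of its best-matching image patch, and a linear layer combines that evidence into class scores.

```python
# Hypothetical sketch of ProtoPNet-style reasoning; sizes and names are assumptions.
import torch
import torch.nn as nn

class TinyProtoNet(nn.Module):
    def __init__(self, backbone: nn.Module, dim=128, n_prototypes=20, n_classes=2):
        super().__init__()
        self.backbone = backbone                        # any conv feature extractor
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, x):
        feats = self.backbone(x)                        # (B, dim, H, W) feature map
        B, D, H, W = feats.shape
        patches = feats.permute(0, 2, 3, 1).reshape(B, H * W, D)
        # Distance from every image patch to every prototype.
        dists = torch.cdist(patches, self.prototypes.unsqueeze(0).expand(B, -1, -1))
        min_d = dists.min(dim=1).values                 # best-matching patch per prototype
        # ProtoPNet-style log activation: small distance -> large evidence.
        sims = torch.log((min_d ** 2 + 1) / (min_d ** 2 + 1e-4))
        return self.classifier(sims)                    # classes accumulate prototype evidence
```

In the actual method, prototypes are also periodically projected onto their nearest training patches, so each learned prototype can be shown as a concrete image part.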
We present a deformable prototypical part network (Deformable ProtoPNet), an interpretable image classifier that integrates the power of deep learning and the interpretability of case-based reasoning. This model classifies input images by comparing them with prototypes learned during training, yielding explanations in the form "this looks like that." However, while previous methods use spatially rigid prototypes, we address this shortcoming by proposing spatially flexible prototypes. Each prototype is made up...
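A toy illustration of the spatial flexibility just described, under stated assumptions: the prototype is a set of parts, and max-pooling lets each part shift within a small window to its best local match, standing in for the learned offsets of the real model.

```python
# Toy deformable-prototype score; names, shapes, and the max-pool stand-in are assumptions.
import torch
import torch.nn.functional as F

def deformable_prototype_score(feats, parts, window=1):
    """feats: (D, H, W) feature map; parts: (K, D) prototypical parts."""
    D, H, W = feats.shape
    f = F.normalize(feats.reshape(D, -1), dim=0)     # unit-norm local features
    p = F.normalize(parts, dim=1)                    # unit-norm parts
    sim = (p @ f).reshape(-1, H, W)                  # (K, H, W) cosine similarity maps
    # Each part may take its best response within +/-window cells ("deformation").
    sim = F.max_pool2d(sim.unsqueeze(0), kernel_size=2 * window + 1,
                       stride=1, padding=window).squeeze(0)
    # The prototype fires at the location where its shifted parts agree most.
    return sim.sum(dim=0).max()
```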
Background Mirai, a state-of-the-art deep learning–based algorithm for predicting short-term breast cancer risk, outperforms standard clinical risk models. However, Mirai is a black box, risking overreliance on the algorithm and incorrect diagnoses. Purpose To identify whether bilateral dissimilarity underpins Mirai's reasoning process; to create a simplified, intelligible model, AsymMirai, using bilateral dissimilarity; and to determine whether AsymMirai may approximate Mirai's performance in 1–5-year risk prediction. Materials and Methods This...
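A hedged sketch of the bilateral-dissimilarity signal named above, not the study's code: embeddings of the two sides are compared locally, and the most asymmetric region drives the score. The function name, shapes, and max-reduction are illustrative assumptions.

```python
# Illustrative asymmetry score between left/right mammogram embeddings (assumed shapes).
import torch

def bilateral_dissimilarity(left_feats, right_feats):
    """left_feats, right_feats: (D, H, W) embeddings of the two sides,
    one mirrored so that anatomy aligns. Returns a scalar asymmetry score."""
    diff = (left_feats - right_feats).pow(2).sum(dim=0).sqrt()  # (H, W) local distance map
    return diff.max()  # the most dissimilar local region dominates the score
```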
In intensive care units (ICUs), critically ill patients are monitored with electroencephalography (EEG) to prevent serious brain injury. EEG monitoring is constrained by clinician availability, and interpretation can be subjective and prone to interobserver variability. Automated deep-learning systems could reduce human bias and accelerate the diagnostic process. However, existing uninterpretable (black-box) models are untrustworthy, difficult to troubleshoot, and lack accountability in real-world...
Interpretability in machine learning models is important in high-stakes decisions, such as whether to order a biopsy based on a mammographic exam. Mammography poses challenges that are not present in other computer vision tasks: datasets are small, confounding information is present, and it can be difficult even for a radiologist to decide between watchful waiting and biopsy based on a mammogram alone. In this work, we present a framework for interpretable machine learning-based mammography. In addition to predicting whether a lesion is malignant or benign, our work aims...
There is increasing interest in using deep learning and computer vision to help guide clinical decisions, such as whether to order a biopsy based on a mammogram. Existing networks are typically black box, unable to explain how they make their predictions. We present an interpretable deep-learning network which explains its predictions in terms of the BI-RADS features mass shape and mass margin. Our model predicts mass margin and mass shape, then uses the logits from those models to predict malignancy, also using an interpretable model. The prototypical...
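A minimal sketch of that two-stage design, assuming the standard BI-RADS category counts (three mass shapes, five mass margins) and hypothetical names: shape and margin logits feed a small linear model whose weights show how each feature moves the malignancy score.

```python
# Hypothetical second-stage model mapping BI-RADS logits to malignancy.
import torch
import torch.nn as nn

class BiradsToMalignancy(nn.Module):
    def __init__(self, n_shape=3, n_margin=5):
        super().__init__()
        # A linear head keeps the decision auditable: each BI-RADS logit
        # contributes one readable weight to the final score.
        self.head = nn.Linear(n_shape + n_margin, 1)

    def forward(self, shape_logits, margin_logits):
        return self.head(torch.cat([shape_logits, margin_logits], dim=-1))
```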
Tools for computer-aided diagnosis based on deep learning have become increasingly important in the medical field. Such tools can be useful, but require effective communication of their decision-making process in order to safely and meaningfully guide clinical decisions. Inherently interpretable models provide an explanation for each decision that matches their internal reasoning process. We present a user interface that incorporates the Interpretable AI Algorithm for Breast Lesions (IAIA-BL) model, which interpretably...
In intensive care units (ICUs), critically ill patients are monitored with electroencephalograms (EEGs) to prevent serious brain injury. The number of patients who can be monitored is constrained by the availability of trained physicians to read EEGs, and EEG interpretation can be subjective and prone to inter-observer variability. Automated deep learning systems could reduce human bias and accelerate the diagnostic process. However, black box models are untrustworthy, difficult to troubleshoot, and lack accountability in real-world...
Digital mammography is essential to breast cancer detection, and deep learning offers promising tools for faster and more accurate mammogram analysis. In radiology and other high-stakes environments, uninterpretable ("black box") models are unsuitable, and there is a call in these fields to make interpretable models. Recent work in computer vision provides transparency to formerly black boxes by utilizing prototypes for case-based explanations, achieving high accuracy in applications including mammography. However,...
Prototypical-part models are a popular interpretable alternative to black-box deep learning models for computer vision. However, they are difficult to train, with high sensitivity to hyperparameter tuning, inhibiting their application to new datasets and our understanding of which methods truly improve their performance. To facilitate the careful study of prototypical-part networks (ProtoPNets), we create a framework for integrating components -- ProtoPNeXt. Using ProtoPNeXt, we show that applying Bayesian hyperparameter tuning and an angular...
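A short sketch of an angular prototype similarity of the kind referenced above: cosine similarity between unit-normalized patch features and prototypes in place of L2 distance. This is an assumption-labeled illustration, not the ProtoPNeXt code.

```python
# Illustrative angular (cosine) prototype similarity; not the ProtoPNeXt implementation.
import torch
import torch.nn.functional as F

def angular_similarity(patches, prototypes):
    """patches: (N, D) local features; prototypes: (P, D).
    Returns (N, P) cosine similarities in [-1, 1]: only the direction
    of a feature matters, not its magnitude."""
    return F.normalize(patches, dim=1) @ F.normalize(prototypes, dim=1).T
```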
When we deploy machine learning models in high-stakes medical settings, we must ensure that these models make accurate predictions that are consistent with known medical science. Inherently interpretable networks address this need by explaining the rationale behind each decision while maintaining equal or higher accuracy compared to black-box models. In this work, we present a novel interpretable neural network algorithm that uses case-based reasoning for mammography. Designed to aid a radiologist in their decisions, our network presents both a prediction of...
In electroencephalogram (EEG) recordings, the presence of interictal epileptiform discharges (IEDs) serves as a critical biomarker for seizures or seizure-like events. Detecting IEDs can be difficult; even highly trained experts disagree on the same sample. As a result, specialists have turned to machine-learning models for assistance. However, many existing models are black boxes and do not provide any human-interpretable reasoning for their decisions. In high-stakes medical applications, it is important for models to be interpretable so...