Adriel Saporta

ORCID: 0000-0002-8726-2278
Research Areas
  • Topic Modeling
  • Machine Learning in Healthcare
  • Natural Language Processing Techniques
  • Radiomics and Machine Learning in Medical Imaging
  • Statistical Methods and Inference
  • Machine Learning and Data Classification
  • Biomedical Text Mining and Ontologies
  • COVID-19 diagnosis using AI
  • Interpreting and Communication in Healthcare
  • Explainable Artificial Intelligence (XAI)
  • Statistical Methods and Bayesian Inference
  • Advanced X-ray and CT Imaging
  • Intelligent Tutoring Systems and Adaptive Learning
  • AI in cancer detection
  • Clinical Reasoning and Diagnostic Skills

Apple (United Kingdom)
2022

New York University
2022

Stanford University
2021

Abstract Saliency methods, which produce heat maps that highlight the areas of a medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish...

10.1038/s42256-022-00536-x article EN cc-by Nature Machine Intelligence 2022-10-10
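The abstract above evaluates how well saliency heat maps localize clinically relevant regions. As a rough illustration of that kind of check, the sketch below thresholds a toy heat map and scores it against a ground-truth mask with intersection-over-union; the thresholding rule and the metric choice here are illustrative assumptions, not the paper's exact protocol.

```python
# Hypothetical localization check for a saliency heat map:
# binarize the map, then compare it to a ground-truth mask.

def threshold_map(heat, frac=0.5):
    """Binarize a heat map at a fraction of its peak value."""
    peak = max(max(row) for row in heat)
    cut = frac * peak
    return [[1 if v >= cut else 0 for v in row] for row in heat]

def iou(mask_a, mask_b):
    """Intersection-over-union of two equal-shape binary masks."""
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    return inter / union if union else 0.0

heat = [[0.1, 0.9, 0.8],
        [0.0, 0.7, 0.2],
        [0.0, 0.1, 0.0]]
truth = [[0, 1, 1],
         [0, 1, 0],
         [0, 0, 0]]

score = iou(threshold_map(heat), truth)  # 1.0 when the thresholded map matches
```

A real evaluation would run this over many images and compare the saliency scores against a human-expert localization benchmark, which is the spirit of the metrics the abstract describes.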

Extracting structured clinical information from free-text radiology reports can enable the use of report information for a variety of critical healthcare applications. In our work, we present RadGraph, a dataset of entities and relations in full-text chest X-ray radiology reports based on a novel information extraction schema designed to structure reports. We release a development dataset, which contains board-certified radiologist annotations for 500 reports from MIMIC-CXR (14,579 entities and 10,889 relations), and a test dataset, which contains two independent sets of annotations for 100 reports split equally across MIMIC-CXR and CheXpert...

10.48550/arxiv.2106.14463 preprint EN other-oa arXiv (Cornell University) 2021-01-01
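The RadGraph abstract describes annotating entities and typed relations over report tokens. The record below sketches what one such annotation might look like; the field names (`tokens`, `label`, `start_ix`, `end_ix`, `relations`) are illustrative assumptions about the schema, not necessarily the released file format.

```python
# Sketch of a RadGraph-style annotation: entities spanning report
# tokens, plus typed relations between entity ids. Field names are
# assumed for illustration.

report = "Mild cardiomegaly is noted"
annotation = {
    "entities": {
        "1": {"tokens": "Mild", "label": "Observation::definitely present",
              "start_ix": 0, "end_ix": 0, "relations": [["modify", "2"]]},
        "2": {"tokens": "cardiomegaly", "label": "Observation::definitely present",
              "start_ix": 1, "end_ix": 1, "relations": []},
    }
}

# Recover each entity's surface text from its token indices.
tokens = report.split()
spans = {k: " ".join(tokens[e["start_ix"]: e["end_ix"] + 1])
         for k, e in annotation["entities"].items()}
```

Structuring reports this way is what lets downstream applications query, for example, every observation an entity modifies, rather than re-parsing free text.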

Abstract Saliency methods, which “explain” deep neural networks by producing heat maps that highlight the areas of a medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. Although many saliency methods have been proposed for medical imaging interpretation, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven...

10.1101/2021.02.28.21252634 preprint EN cc-by-nc-nd medRxiv (Cold Spring Harbor Laboratory) 2021-03-02

Recent advances in Natural Language Processing (NLP), and specifically automated Question Answering (QA) systems, have demonstrated both impressive linguistic fluency and a pernicious tendency to reflect social biases. In this study, we introduce Q-Pain, a dataset for assessing bias in medical QA in the context of pain management, one of the most challenging forms of clinical decision-making. Along with the dataset, we propose a new, rigorous framework, including a sample experimental design, to measure the potential biases...

10.48550/arxiv.2108.01764 preprint EN cc-by arXiv (Cornell University) 2021-01-01
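One common way to probe the kind of bias Q-Pain targets is to ask a QA system the same clinical vignette with only the patient's demographic swapped, then compare treatment rates across groups. The sketch below is a toy version of that probe; the `answer` function is a deliberately biased stand-in, not a real QA system, and the group names are placeholders.

```python
# Toy demographic-swap bias probe for a medical-QA system.
# `answer` is a hypothetical stand-in that (undesirably) keys on the
# demographic rather than the clinical content.

def answer(vignette, demographic):
    """Stand-in QA system with a built-in disparity, for illustration."""
    return "prescribe" if demographic == "group_a" else "defer"

vignette = "Patient reports severe post-operative pain."
groups = ["group_a", "group_b"]
rates = {g: 1.0 if answer(vignette, g) == "prescribe" else 0.0
         for g in groups}
gap = rates["group_a"] - rates["group_b"]  # treatment-rate disparity
```

A nonzero gap on identical vignettes is evidence the system conditions on the demographic itself, which is the failure mode such an audit is designed to surface.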

Extracting structured clinical information from free-text radiology reports can enable the use of report information for a variety of critical healthcare applications. In our work, we present RadGraph, a dataset of entities and relations in full-text chest X-ray radiology reports based on a novel information extraction schema designed to structure reports. We release a development dataset, which contains board-certified radiologist annotations for 500 reports from MIMIC-CXR (14,579 entities and 10,889 relations), and a test dataset, which contains two independent sets of annotations for 100 reports split equally across MIMIC-CXR and CheXpert...

10.13026/hm87-5p47 dataset EN PhysioNet 2021-06-28

Contrastive learning methods, such as CLIP, leverage naturally paired data (for example, images and their corresponding text captions) to learn general representations that transfer efficiently to downstream tasks. While these approaches are generally applied to two modalities, domains such as robotics, healthcare, and video need to support many types of data at once. We show that the pairwise application of CLIP fails to capture joint information between modalities, thereby limiting the quality of the learned representations. To address this issue,...

10.48550/arxiv.2411.01053 preprint EN arXiv (Cornell University) 2024-11-01
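The pairwise objective this abstract critiques is the standard CLIP-style InfoNCE loss: each image embedding should score highest against its own caption among all captions in the batch. A minimal sketch of that two-modality objective, with toy 2-D embeddings standing in for encoder outputs:

```python
# Minimal pairwise InfoNCE (CLIP-style) objective over toy embeddings.
# Embedding values and temperature are illustrative assumptions.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(img_embs, txt_embs, temperature=0.1):
    """Average cross-entropy of matching image i to caption i."""
    loss = 0.0
    for i, img in enumerate(img_embs):
        logits = [dot(img, txt) / temperature for txt in txt_embs]
        m = max(logits)  # subtract max for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_z - logits[i]  # -log softmax prob of the true pair
    return loss / len(img_embs)

imgs = [[1.0, 0.0], [0.0, 1.0]]
caps = [[1.0, 0.0], [0.0, 1.0]]  # perfectly aligned pairs
loss = info_nce(imgs, caps)      # near zero for aligned embeddings
```

With three or more modalities, applying this loss to each pair separately is exactly the setup the abstract argues fails to capture joint information shared across all modalities at once.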

Feature attribution methods identify which features of an input most influence a model's output. Most widely used feature attribution methods (such as SHAP, LIME, and Grad-CAM) are "class-dependent" in that they generate a feature attribution vector as a function of the class. In this work, we demonstrate that class-dependent methods can "leak" information about the selected class, making that class appear more likely than it is. Thus, an end user runs the risk of drawing false conclusions when interpreting an explanation generated by a class-dependent method. In contrast, we introduce...

10.48550/arxiv.2302.12893 preprint EN cc-by arXiv (Cornell University) 2023-01-01
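"Class-dependent" here means the explanation is a function of which class you ask about: the same input yields a different attribution vector per class. The toy below makes that concrete with a simple occlusion-style attribution over a linear two-class scorer; both the model and the attribution rule are hypothetical stand-ins, not the methods named in the abstract.

```python
# Toy class-dependent attribution: zero out each feature and record
# the score drop for the *chosen* class. Model weights are illustrative.

def model(x):
    """Linear two-class scorer over three features."""
    w = {0: [2.0, -1.0, 0.0], 1: [-1.0, 3.0, 0.5]}
    return {k: sum(wi * xi for wi, xi in zip(w[k], x)) for k in w}

def occlusion_attribution(x, cls):
    """Score drop for class `cls` when each feature is zeroed out."""
    base = model(x)[cls]
    attrs = []
    for i in range(len(x)):
        x_occ = list(x)
        x_occ[i] = 0.0
        attrs.append(base - model(x_occ)[cls])
    return attrs

x = [1.0, 1.0, 1.0]
a0 = occlusion_attribution(x, 0)  # explanation "for class 0"
a1 = occlusion_attribution(x, 1)  # explanation "for class 1"
```

Because `a0` and `a1` differ for the same input, an explanation shown "for" a class carries information about the class choice itself, which is the leakage the abstract warns an end user can misread as evidence for that class.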

Spurious correlations allow flexible models to predict well during training but poorly on related test distributions. Recent work has shown that models that satisfy particular independencies involving correlation-inducing nuisance variables have guarantees on their test performance. Enforcing such independencies requires nuisances to be observed during training. However, nuisances, such as demographics or image background labels, are often missing. Enforcing independence on just the observed data does not imply independence over the entire population. Here we derive...

10.48550/arxiv.2112.00881 preprint EN cc-by arXiv (Cornell University) 2021-01-01