EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images
Benchmark
Modality
Table (database)
Health records
DOI:
10.48550/arxiv.2310.18652
Publication Date:
2023
AUTHORS (11)
ABSTRACT
Electronic Health Records (EHRs) contain patients' medical histories in various multi-modal formats, yet current EHR Question Answering (QA) systems largely overlook the potential for joint reasoning across the imaging and table modalities. In this paper, we introduce EHRXQA, a novel multi-modal question answering dataset that combines structured EHRs with chest X-ray images. To develop our dataset, we first construct two uni-modal resources: 1) MIMIC-CXR-VQA, our newly created visual question answering (VQA) benchmark, specifically designed to augment the imaging modality in EHR QA; and 2) EHRSQL (MIMIC-IV), a refashioned version of a previously established table-based EHR QA dataset. By integrating these two uni-modal resources, we successfully construct a multi-modal EHR QA dataset that necessitates both uni-modal and cross-modal reasoning. To address the unique challenges of multi-modal questions within EHRs, we propose a NeuralSQL-based strategy equipped with an external VQA API. This pioneering endeavor enhances engagement with multi-modal EHR sources, and we believe our dataset can catalyze advances in real-world scenarios such as clinical decision-making and research. EHRXQA is available at https://github.com/baeseongsu/ehrxqa.
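To make the "SQL plus external VQA API" idea concrete, the sketch below shows one simple way such a cross-modal query could be executed: the VQA call is registered as a SQL user-defined function so a single query can mix table predicates with an image-level question. This is a minimal illustration, not the authors' NeuralSQL executor; the FUNC_VQA name, the run_vqa_model helper, and the tb_cxr table with its columns are assumptions for illustration rather than the dataset's actual schema.

```python
import sqlite3

def run_vqa_model(image_path: str, question: str) -> str:
    """Placeholder for the external VQA API on a chest X-ray image (hypothetical)."""
    raise NotImplementedError("Plug in a medical VQA model here.")

def make_connection(db_path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(db_path)
    # Expose the VQA call inside SQL so it is evaluated per matching row,
    # letting one query combine structured-EHR filters with image questions.
    conn.create_function(
        "FUNC_VQA", 2,
        lambda question, image_path: run_vqa_model(image_path, question),
    )
    return conn

# Example cross-modal question: "Was there evidence of cardiomegaly on the
# patient's most recent chest X-ray?" (table/column names are assumed)
query = """
SELECT FUNC_VQA('is there evidence of cardiomegaly?', image_path)
FROM tb_cxr
WHERE subject_id = 123
ORDER BY study_datetime DESC
LIMIT 1
"""
# rows = make_connection("ehrxqa.db").execute(query).fetchall()
```

The design choice here mirrors the abstract's description: the structured-table reasoning stays in ordinary SQL, while the imaging modality is reached through a single function call that delegates to an external VQA model.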