Prompting Large Language Models for Zero-Shot Clinical Prediction with Structured Longitudinal Electronic Health Record Data

Keywords: Electronic health record; Health records; Longitudinal data; Longitudinal study; Zero-shot prediction
DOI: 10.48550/arxiv.2402.01713 Publication Date: 2024-01-25
ABSTRACT
The inherent complexity of structured longitudinal Electronic Health Record (EHR) data poses a significant challenge when integrating it with Large Language Models (LLMs), which are traditionally tailored for natural language processing. Motivated by the urgent need for swift decision-making during new disease outbreaks, where traditional predictive models often fail due to a lack of historical data, this research investigates the adaptability of LLMs, like GPT-4, to EHR data. We particularly focus on their zero-shot capabilities, which enable them to make predictions in scenarios for which they haven't been explicitly trained. In response to the longitudinal, sparse, and knowledge-infused nature of EHR data, our prompting approach takes into account specific EHR characteristics such as units and reference ranges, and employs an in-context learning strategy that aligns with clinical contexts. Our comprehensive experiments on the MIMIC-IV and TJH datasets demonstrate that with our elaborately designed prompting framework, LLMs can improve prediction performance on key tasks such as mortality, length-of-stay, and 30-day readmission by about 35%, surpassing ML models in few-shot settings. Our research underscores the potential of LLMs in enhancing clinical decision-making, especially in urgent healthcare situations such as the outbreak of emerging diseases with no labeled data. The code is publicly available at https://github.com/yhzhu99/llm4healthcare for reproducibility.
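To make the prompting approach described above concrete, the sketch below serializes longitudinal EHR measurements, together with their units and reference ranges, into natural-language text for a zero-shot prediction prompt. This is a minimal illustration under stated assumptions: all function names, field names, and the prompt wording are hypothetical and are not taken from the authors' implementation.

```python
def serialize_visit(features: dict) -> str:
    """Render one visit's measurements as readable lines with units and
    reference ranges, flagging out-of-range values (illustrative format)."""
    lines = []
    for name, info in features.items():
        value, unit = info["value"], info["unit"]
        low, high = info["reference_range"]
        flag = "normal" if low <= value <= high else "abnormal"
        lines.append(
            f"- {name}: {value} {unit} "
            f"(reference range {low}-{high} {unit}, {flag})"
        )
    return "\n".join(lines)


def build_prompt(visits: list, task: str) -> str:
    """Assemble a zero-shot prompt over a chronological sequence of visits.
    The task description and answer format here are assumptions."""
    parts = [
        f"You are a clinical assistant. Task: predict {task}.",
        "Patient record (chronological visits):",
    ]
    for i, visit in enumerate(visits, start=1):
        parts.append(f"Visit {i}:\n{serialize_visit(visit)}")
    parts.append("Answer with a probability between 0 and 1.")
    return "\n\n".join(parts)


# Example with a toy lab value (creatinine above its reference range)
visit = {"Creatinine": {"value": 2.1, "unit": "mg/dL",
                        "reference_range": (0.6, 1.2)}}
prompt = build_prompt([visit], "in-hospital mortality")
print(prompt)
```

Embedding units and reference ranges lets the model judge whether a value is clinically abnormal without any task-specific training examples, which is the core idea behind the zero-shot setting the abstract describes.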