Augmenting Black-box LLMs with Medical Textbooks for Clinical Question Answering
DOI: 10.48550/arxiv.2309.02233
Publication Date: 2023-01-01
AUTHORS (3)
ABSTRACT
Large-scale language models (LLMs) like ChatGPT have demonstrated impressive abilities in generating responses based on human instructions. However, their use in the medical field can be challenging due to their lack of specific, in-depth knowledge. In this study, we present a system called LLMs Augmented with Medical Textbooks (LLM-AMT), designed to enhance proficiency in specialized domains. LLM-AMT integrates authoritative medical textbooks into the LLMs' framework using plug-and-play modules. These modules include a Query Augmenter, a Hybrid Textbook Retriever, and a Knowledge Self-Refiner. Together, they incorporate authoritative medical knowledge into the answering process. Additionally, an LLM Reader aids contextual understanding. Our experimental results on three medical QA tasks demonstrate that LLM-AMT significantly improves response quality, with accuracy gains ranging from 11.6% to 16.6%. Notably, with GPT-4-Turbo as the base model, LLM-AMT outperforms the Med-PaLM 2 model, which was pre-trained on a massive medical corpus, by 2-3%. We found that, despite being 100x smaller in size, the medical textbook corpus proves to be a more effective knowledge database than Wikipedia in the medical domain, boosting performance by 7.8%-13.7%.
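The plug-and-play pipeline described above can be sketched as a simple composition of its four named modules. Everything below is a hypothetical stand-in: the paper's actual Query Augmenter, Hybrid Textbook Retriever, Knowledge Self-Refiner, and LLM Reader are LLM- and retrieval-backed components, not the toy functions and toy corpus used here to show the data flow.

```python
# Toy "textbook" passages standing in for the medical textbook database.
TEXTBOOK = [
    "Metformin is a first-line oral agent for type 2 diabetes.",
    "Beta blockers reduce heart rate and myocardial oxygen demand.",
    "Amoxicillin is a penicillin-class antibiotic.",
]

def query_augmenter(question: str) -> list[str]:
    # Hypothetical: the real module uses an LLM to rewrite/expand the query.
    return [question, question.lower()]

def hybrid_retriever(queries: list[str], corpus: list[str], k: int = 2) -> list[str]:
    # Hypothetical keyword-overlap scorer; the paper's retriever combines
    # sparse and dense retrieval over textbook passages.
    terms = {w.strip(".,?").lower() for q in queries for w in q.split()}
    def score(passage: str) -> int:
        return sum(1 for w in passage.split() if w.strip(".,?").lower() in terms)
    return sorted(corpus, key=score, reverse=True)[:k]

def knowledge_self_refiner(passages: list[str]) -> list[str]:
    # Hypothetical: the real module uses the LLM to filter noisy evidence.
    return [p for p in passages if p]

def llm_reader(question: str, evidence: list[str]) -> str:
    # Hypothetical: the real reader is the black-box LLM conditioned on the
    # retrieved, refined evidence.
    return f"Q: {question}\nEvidence: {' '.join(evidence)}"

def llm_amt(question: str) -> str:
    queries = query_augmenter(question)
    passages = hybrid_retriever(queries, TEXTBOOK)
    evidence = knowledge_self_refiner(passages)
    return llm_reader(question, evidence)

print(llm_amt("What is a first-line drug for type 2 diabetes?"))
```

The design point the abstract emphasizes is that every stage is a module wrapped around an unmodified black-box LLM, so the base model can be swapped (e.g. for GPT-4-Turbo) without retraining.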