Self-Prompting Large Language Models for Zero-Shot Open-Domain QA

DOI: 10.48550/arxiv.2212.08635 Publication Date: 2022-01-01
ABSTRACT
Open-Domain Question Answering (ODQA) aims to answer questions without explicitly providing specific background documents. This task becomes notably challenging in a zero-shot setting where no data is available to train tailored retrieval-reader models. While recent Large Language Models (LLMs) like GPT-3 have demonstrated their effectiveness in zero-shot ODQA using direct prompting methods, these methods still fall short of fully harnessing the potential of LLMs when their knowledge is only implicitly invoked. In this paper, we propose a Self-Prompting framework to explicitly utilize the massive knowledge encoded in the parameters of LLMs and their strong instruction understanding abilities. Concretely, we prompt LLMs step by step to generate multiple pseudo QA pairs with background passages and explanations entirely from scratch. These generated elements are then utilized for in-context learning. Experimental results show that our method significantly surpasses previous state-of-the-art zero-shot methods on three widely-used ODQA datasets and even achieves comparable performance to various customized fine-tuned models with full training data. Our code is available at https://github.com/lockon-n/self-prompting.
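A minimal sketch of the Self-Prompting pipeline as described in the abstract, assuming a black-box `llm(prompt) -> str` interface to an instruction-following model; the prompt wordings, the random demonstration selection, and all function names below are illustrative assumptions, not the authors' exact implementation:

```python
"""Hypothetical sketch of Self-Prompting for zero-shot ODQA:
(1) prompt the LLM step by step to generate pseudo passages, QA pairs,
and explanations from scratch; (2) reuse them as in-context demonstrations."""
from typing import Callable, Dict, List
import random


def self_prompt_pseudo_data(llm: Callable[[str], str],
                            topics: List[str]) -> List[Dict[str, str]]:
    """Generate (passage, QA, explanation) triples entirely from the LLM itself."""
    pseudo_data = []
    for topic in topics:
        # Step 1: have the LLM write a short background passage about the topic.
        passage = llm(f"Write a short Wikipedia-style passage about {topic}.")
        # Step 2: have it pose a factoid question answerable from that passage, plus the answer.
        qa = llm(
            f"Passage: {passage}\n"
            "Write a factoid question answerable from this passage, then its answer, "
            "in the form 'Question: ... Answer: ...'."
        )
        # Step 3: have it explain in one sentence why the answer is correct.
        explanation = llm(
            f"Passage: {passage}\n{qa}\n"
            "Explain in one sentence why the answer is correct."
        )
        pseudo_data.append({"passage": passage, "qa": qa, "explanation": explanation})
    return pseudo_data


def answer_with_in_context_learning(llm: Callable[[str], str],
                                    pseudo_data: List[Dict[str, str]],
                                    question: str, k: int = 4) -> str:
    """Answer a test question zero-shot, using k self-generated examples as demonstrations."""
    demos = random.sample(pseudo_data, k=min(k, len(pseudo_data)))
    demo_block = "\n\n".join(
        f"{d['passage']}\n{d['qa']}\nExplanation: {d['explanation']}" for d in demos
    )
    prompt = f"{demo_block}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)
```

Random sampling here merely stands in for whatever demonstration-selection strategy the paper actually uses; the point of the sketch is only the two-stage structure: self-generated pseudo data first, in-context learning on top of it second.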