Improving Language Models via Plug-and-Play Retrieval Feedback
DOI:
10.48550/arxiv.2305.14002
Publication Date:
2023-01-01
AUTHORS (5)
ABSTRACT
Large language models (LLMs) exhibit remarkable performance across various NLP tasks. However, they often generate incorrect or hallucinated information, which hinders their practical applicability in real-world scenarios. Human feedback has been shown to effectively enhance the factuality and quality of generated content, addressing some of these limitations. However, this approach is resource-intensive: it involves manual input and supervision, and can be time-consuming and expensive. Moreover, it cannot be provided during inference, further limiting its utility in dynamic and interactive applications. In this paper, we introduce ReFeed, a novel pipeline designed to enhance LLMs by providing automatic retrieval feedback in a plug-and-play framework, without the need for expensive fine-tuning. ReFeed first generates initial outputs, then utilizes a retrieval model to acquire relevant information from large document collections, and finally incorporates the retrieved information into in-context demonstrations for output refinement, thereby addressing these limitations in a more efficient and cost-effective manner. Experiments on four knowledge-intensive benchmark datasets demonstrate that our proposed pipeline improves performance by more than +6.0% under the zero-shot setting and +2.5% under the few-shot setting, compared to baselines without retrieval feedback.
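The abstract describes a three-step loop: generate an initial output, retrieve relevant passages from a document collection, and refine the output with the retrieved evidence supplied in-context. Below is a minimal Python sketch of such a loop under generic assumptions; the function names (refeed, generate, retrieve), the prompt templates, and the choice of using the initial answer as part of the retrieval query are illustrative placeholders, not the authors' actual implementation or any specific library API.

# Minimal sketch of a plug-and-play retrieval-feedback loop in the spirit of
# ReFeed. All names and prompt formats here are assumptions for illustration.
from typing import Callable, List

def refeed(
    question: str,
    generate: Callable[[str], str],             # any LLM text-generation callable
    retrieve: Callable[[str, int], List[str]],  # any retriever over a document collection
    top_k: int = 3,
) -> str:
    # Step 1: generate an initial output without external evidence.
    initial_answer = generate(f"Question: {question}\nAnswer:")

    # Step 2: retrieve relevant passages; using question + initial answer as
    # the query is one plausible choice, not necessarily the paper's.
    passages = retrieve(f"{question} {initial_answer}", top_k)

    # Step 3: fold the retrieved passages into an in-context demonstration and
    # ask the model to refine its initial output.
    evidence = "\n".join(f"- {p}" for p in passages)
    refine_prompt = (
        f"Question: {question}\n"
        f"Initial answer: {initial_answer}\n"
        f"Retrieved evidence:\n{evidence}\n"
        "Refine the answer so it is consistent with the evidence:"
    )
    return generate(refine_prompt)

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real setup would plug in
    # an LLM client and a dense or sparse retriever instead.
    def dummy_generate(prompt: str) -> str:
        return "refined answer" if "Refine" in prompt else "draft answer"

    def dummy_retrieve(query: str, k: int) -> List[str]:
        return [f"passage {i}" for i in range(k)]

    print(refeed("Who wrote Hamlet?", dummy_generate, dummy_retrieve))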