RaFe: Ranking Feedback Improves Query Rewriting for RAG
FOS: Computer and information sciences
Computation and Language (cs.CL)
Artificial Intelligence (cs.AI)
Information Retrieval (cs.IR)
DOI:
10.48550/arxiv.2405.14431
Publication Date:
2024-05-23
AUTHORS (10)
ABSTRACT
As Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) techniques have evolved, query rewriting has been widely incorporated into RAG systems for downstream tasks like open-domain QA. Many works have attempted to utilize small models with reinforcement learning rather than costly LLMs to improve query rewriting. However, current methods require annotations (e.g., labeled relevant documents or downstream answers) or predesigned rewards for feedback, which lack generalization and fail to utilize signals tailored for query rewriting. In this paper, we propose RaFe, a framework for training query rewriting models free of annotations. By leveraging a publicly available reranker, RaFe provides feedback well aligned with the rewriting objectives. Experimental results demonstrate that RaFe can obtain better performance than baselines.
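The core idea described in the abstract — using a reranker's scores as an annotation-free reward signal for a rewritten query — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_rerank_score` is a hypothetical lexical-overlap scorer standing in for the publicly available cross-encoder reranker, and `rewrite_reward` assumes the reward is simply the mean reranker score over the retrieved documents.

```python
def toy_rerank_score(query: str, doc: str) -> float:
    """Score a (query, doc) pair by token overlap.

    A real system would call a cross-encoder reranker here; this
    toy scorer only illustrates the shape of the feedback signal.
    """
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)


def rewrite_reward(rewrite: str, retrieved_docs: list[str]) -> float:
    """Reward for a rewritten query: mean reranker score of the
    documents it retrieves. No labeled documents or answers are
    needed -- the reranker alone provides the feedback."""
    if not retrieved_docs:
        return 0.0
    scores = [toy_rerank_score(rewrite, d) for d in retrieved_docs]
    return sum(scores) / len(scores)


# Example: score a candidate rewrite against two retrieved passages.
docs = ["capital of france is paris", "paris is in france"]
reward = rewrite_reward("what is the capital of france", docs)
```

In a training loop, this reward would then drive reinforcement learning (or preference-based fine-tuning) of the small rewriting model, favoring rewrites whose retrieved documents the reranker scores highly.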