Zero-Shot Listwise Document Reranking with a Large Language Model
DOI:
10.48550/arxiv.2305.02156
Publication Date:
2023-01-01
AUTHORS (4)
ABSTRACT
Supervised ranking methods based on bi-encoder or cross-encoder architectures have shown success in multi-stage text ranking tasks, but they require large amounts of relevance judgments as training data. In this work, we propose Listwise Reranker with a Large Language Model (LRL), which achieves strong reranking effectiveness without using any task-specific training data. Different from the existing pointwise methods, where documents are scored independently and ranked according to their scores, LRL directly generates a reordered list of document identifiers given the candidate documents. Experiments on three TREC web search datasets demonstrate that LRL not only outperforms zero-shot pointwise methods when reranking first-stage retrieval results, but can also act as a final-stage reranker to improve the top-ranked results of a pointwise method for improved efficiency. Additionally, we apply our approach to subsets of MIRACL, a recent multilingual retrieval dataset, showing its potential to generalize across different languages.
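The listwise idea described in the abstract can be illustrated with a minimal sketch: tag each candidate document with an identifier, ask the model to emit the identifiers in relevance order, and parse that list back into a permutation. This is an assumption-laden illustration, not the paper's actual prompt or parsing code; `generate` stands in for any LLM completion function the caller supplies.

```python
import re

def build_listwise_prompt(query, docs):
    """Assemble a listwise reranking prompt: the query plus each
    candidate document tagged with an identifier like [1], [2], ...
    (hypothetical prompt wording, not the paper's exact template)."""
    lines = [f"Query: {query}", "Candidate passages:"]
    for i, doc in enumerate(docs, start=1):
        lines.append(f"[{i}] {doc}")
    lines.append(
        "Rank the passages above by relevance to the query. "
        "Answer with identifiers only, most relevant first, e.g. [2] > [1]."
    )
    return "\n".join(lines)

def parse_ranking(response, num_docs):
    """Parse identifiers like [3] from the model response into a
    0-based permutation; documents the model omits keep their
    original (first-stage retrieval) order at the tail."""
    seen = []
    for m in re.findall(r"\[(\d+)\]", response):
        idx = int(m) - 1
        if 0 <= idx < num_docs and idx not in seen:
            seen.append(idx)
    remaining = [i for i in range(num_docs) if i not in seen]
    return seen + remaining

def rerank(query, docs, generate):
    """Rerank docs in one listwise LLM call.  `generate` is any
    prompt -> text completion function (e.g. an API client)."""
    prompt = build_listwise_prompt(query, docs)
    order = parse_ranking(generate(prompt), len(docs))
    return [docs[i] for i in order]

# Usage with a stub model that prefers the second passage:
reranked = rerank("capital of France",
                  ["Berlin facts.", "Paris is the capital.", "Rome guide."],
                  lambda prompt: "[2] > [1] > [3]")
```

In contrast to pointwise scoring, the model sees all candidates in one context window, so relative judgments between documents are made directly rather than reconstructed from independent scores.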