mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval
FOS: Computer and information sciences
Computation and Language (cs.CL)
Information Retrieval (cs.IR)
DOI:
10.18653/v1/2024.emnlp-industry.103
Publication Date:
2024-11-27T22:28:12Z
AUTHORS (13)
ABSTRACT
20 pages, 5 figures
We present systematic efforts in building a long-context multilingual text representation model (TRM) and a reranker from scratch for text retrieval. We first introduce a text encoder (base size) enhanced with RoPE and unpadding, pre-trained with a native 8192-token context (longer than the 512-token limit of previous multilingual encoders). We then construct a hybrid TRM and a cross-encoder reranker via contrastive learning. Evaluations show that our text encoder outperforms the previous same-sized state of the art, XLM-R, while our TRM and reranker match the performance of the large-sized state-of-the-art BGE-M3 models and achieve better results on long-context retrieval benchmarks. Further analysis demonstrates that our proposed models exhibit higher efficiency during both training and inference. We believe their efficiency and effectiveness could benefit various research and industrial applications.
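The abstract credits RoPE (rotary position embeddings) for enabling the native 8192-token context. As a minimal sketch (not the paper's implementation), RoPE rotates each consecutive pair of vector dimensions by a position-dependent angle, so attention scores between rotated queries and keys depend only on relative position; the function name and `base` default below are illustrative assumptions following the common convention.

```python
import math

def rope(x, position, base=10000.0):
    """Apply rotary position embedding to one vector (illustrative sketch).

    x: list of floats with even length, treated as consecutive (even, odd) pairs.
    Each pair is rotated by angle position / base**(i/d), a pure rotation that
    preserves the vector's norm.
    """
    d = len(x)
    out = [0.0] * d
    for i in range(0, d, 2):
        theta = position / (base ** (i / d))
        c, s = math.cos(theta), math.sin(theta)
        # 2-D rotation of the (x[i], x[i+1]) pair
        out[i] = x[i] * c - x[i + 1] * s
        out[i + 1] = x[i] * s + x[i + 1] * c
    return out
```

Because the encoding is a rotation rather than an added offset, it extrapolates more gracefully to positions beyond those seen in short-context pre-training, which is why long-context encoders commonly adopt it.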