MCL-NER: Cross-Lingual Named Entity Recognition via Multi-View Contrastive Learning
Named Entity Recognition
Entity linking
DOI:
10.1609/aaai.v38i17.29843
Publication Date:
2024-03-25T11:59:33Z
AUTHORS (7)
ABSTRACT
Cross-lingual named entity recognition (CrossNER) faces challenges stemming from uneven performance due to the scarcity of multilingual corpora, especially for non-English data. While prior efforts mainly focus on data-driven transfer methods, a significant aspect that has not been fully explored is aligning both semantic and token-level representations across diverse languages. In this paper, we propose Multi-view Contrastive Learning Named Entity Recognition (MCL-NER). Specifically, we reframe the CrossNER task as a problem of recognizing relationships between pairs of tokens. This approach taps into the inherent contextual nuances of token-to-token connections within entities, allowing us to align representations across different languages. A multi-view contrastive learning framework is introduced, encompassing contrasts between source, codeswitched, and target sentences, as well as contrasts among token-to-token relations. By enforcing agreement within both semantic and relational spaces, we minimize the gap between source sentences and their codeswitched and target counterparts. This alignment extends to the token level, enhancing the projection of entity information across languages. We further augment training by combining self-training over labeled source data and unlabeled target data. Our experiments on the XTREME benchmark, spanning 40 languages, demonstrate the superiority of MCL-NER over prior data-driven and model-based approaches. It achieves a substantial increase of nearly +2.0 F1 scores across a broad spectrum of languages and establishes itself as the new state-of-the-art performer.
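The abstract does not include the authors' implementation, but the core idea of contrasting paired sentence views (e.g. a source sentence and its code-switched counterpart) can be sketched with a standard batch-wise InfoNCE objective. The function name, the NumPy setting, and the temperature value below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def info_nce_loss(source: np.ndarray, target: np.ndarray, temperature: float = 0.1) -> float:
    """Illustrative InfoNCE-style contrastive loss (not the paper's code).

    Each row of `source` (shape: batch x dim) is an embedding of one view
    of a sentence; the same row of `target` is its paired view (e.g. the
    code-switched version). The loss pushes each source embedding to be
    closest to its own pair among all targets in the batch.
    """
    # L2-normalize so dot products become cosine similarities
    s = source / np.linalg.norm(source, axis=1, keepdims=True)
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = (s @ t.T) / temperature              # (batch, batch) similarity matrix
    # Numerically stable softmax cross-entropy; positives lie on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

As a sanity check, correctly paired views yield a lower loss than mismatched ones: `info_nce_loss(x, x)` is near zero, while `info_nce_loss(x, np.roll(x, 1, axis=0))` is large, since each row's true match then sits off the diagonal.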