ProKD: An Unsupervised Prototypical Knowledge Distillation Network for Zero-Resource Cross-Lingual Named Entity Recognition

DOI: 10.1609/aaai.v37i11.26507 Publication Date: 2023-06-27T18:05:18Z
ABSTRACT
For named entity recognition (NER) in zero-resource languages, utilizing knowledge distillation methods to transfer language-independent knowledge from rich-resource source languages is an effective means. Typically, these approaches adopt a teacher-student architecture, where the teacher network is trained on the source language and the student network seeks to learn from the teacher and is expected to perform well on the target language. Despite the impressive performance achieved by these methods, we argue that they have two limitations. Firstly, the teacher network fails to effectively learn the language-independent knowledge shared across languages due to the differences in feature distribution between languages. Secondly, the student network acquires all of its knowledge from the teacher and ignores the learning of target language-specific knowledge. Undesirably, these limitations would hinder the model's performance on the target language. This paper proposes an unsupervised prototypical knowledge distillation network (ProKD) to address these issues. Specifically, ProKD presents a contrastive learning-based prototype alignment method to achieve class-level feature alignment by adjusting the distance among prototypes of the source and target languages, boosting the teacher network's capacity to acquire language-independent knowledge. In addition, ProKD introduces a prototypical self-training method to learn the intrinsic structure of the target language by retraining the student network on the target data using the samples' distance information to prototypes, thereby enhancing the student network's ability to acquire language-specific knowledge. Extensive experiments on three benchmark cross-lingual NER datasets demonstrate the effectiveness of our approach.
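The sketch below is a minimal illustration, not the authors' released code, of the two ideas the abstract describes: class prototypes computed as mean token features, an InfoNCE-style loss that pulls same-class source and target prototypes together (contrastive prototype alignment), and soft pseudo-labels derived from prototype distances, as used in prototypical self-training. All function names, the hidden size, the number of classes, and the temperature are assumptions for illustration only; PyTorch is used merely as a convenient tensor library.

import torch
import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    # Mean feature vector per entity class; classes absent from the batch
    # keep a zero prototype (a simplification for this sketch).
    protos = torch.zeros(num_classes, features.size(-1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def prototype_alignment_loss(src_protos, tgt_protos, temperature=0.1):
    # InfoNCE-style objective: the source prototype of class c should be most
    # similar to the target prototype of the same class c.
    src = F.normalize(src_protos, dim=-1)
    tgt = F.normalize(tgt_protos, dim=-1)
    logits = src @ tgt.t() / temperature              # (C, C) similarity matrix
    targets = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, targets)

def prototype_pseudo_labels(features, protos):
    # Soft pseudo-labels from distances to prototypes: nearer prototype,
    # higher probability (used here as a stand-in for prototypical self-training).
    dists = torch.cdist(features, protos)             # (N, C) Euclidean distances
    return F.softmax(-dists, dim=-1)

# Toy usage with random encoder outputs (hidden size 768, 4 entity classes);
# in the zero-resource setting, target "labels" would be teacher pseudo-labels.
src_feats, src_labels = torch.randn(32, 768), torch.randint(0, 4, (32,))
tgt_feats, tgt_labels = torch.randn(32, 768), torch.randint(0, 4, (32,))
align_loss = prototype_alignment_loss(
    class_prototypes(src_feats, src_labels, 4),
    class_prototypes(tgt_feats, tgt_labels, 4),
)
soft_labels = prototype_pseudo_labels(tgt_feats, class_prototypes(src_feats, src_labels, 4))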