Exploring Cross-sentence Contexts for Named Entity Recognition with BERT

Keywords: Named Entity Recognition · Entity Linking · Sequence Labeling
DOI: 10.18653/v1/2020.coling-main.78 Publication Date: 2021-01-08T13:58:31Z
ABSTRACT
Named entity recognition (NER) is frequently addressed as a sequence classification task, with each input consisting of one sentence of text. It is nevertheless clear that useful information for NER is often found elsewhere in the text as well. Recent self-attention models such as BERT can both capture long-distance relationships in input and represent inputs consisting of several sentences. This creates opportunities for adding cross-sentence information to natural language processing tasks. This paper presents a systematic study exploring the use of cross-sentence information for NER using BERT models in five languages. We find that adding context as additional sentences to BERT input systematically increases NER performance. Including a sentence in multiple input samples also allows us to study its predictions in different contexts. We propose a straightforward method, Contextual Majority Voting (CMV), to combine these different predictions, and demonstrate that it further increases NER performance. Evaluation on established datasets, including the CoNLL'02 and CoNLL'03 NER benchmarks, demonstrates that our proposed approach can improve on state-of-the-art results for English, Dutch, and Finnish, achieves the best reported BERT-based results for German, and is on par with other BERT-based approaches for Spanish. We release all methods implemented in this work under open licenses.
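The core idea of combining per-sentence predictions made in different contexts can be sketched as a simple per-token majority vote. The snippet below is a minimal illustration, not the paper's implementation: it assumes each context window yields one label sequence for the target sentence, and it breaks ties by first-seen order, which may differ from the tie-breaking used in the actual CMV method.

```python
from collections import Counter

def contextual_majority_vote(predictions):
    """Combine label predictions for one sentence seen in several contexts.

    `predictions` is a list of label sequences, one per context window in
    which the sentence appeared; all sequences cover the same tokens.
    Returns the per-token majority label (ties broken by first occurrence,
    an assumption for this sketch).
    """
    combined = []
    for token_labels in zip(*predictions):
        counts = Counter(token_labels)
        combined.append(counts.most_common(1)[0][0])
    return combined

# Hypothetical example: a three-token sentence predicted in three contexts.
preds = [
    ["B-PER", "I-PER", "O"],
    ["B-PER", "O",     "O"],
    ["B-PER", "I-PER", "O"],
]
print(contextual_majority_vote(preds))  # -> ['B-PER', 'I-PER', 'O']
```

The voted sequence keeps `I-PER` for the second token because two of the three context windows agree, illustrating how disagreements introduced by any single context can be smoothed out.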