KARL-Trans-NER: Knowledge Aware Representation Learning for Named Entity Recognition using Transformers
Named Entity Recognition
Entity Linking
Feature Learning
DOI:
10.48550/arXiv.2111.15436
Publication Date:
2021-01-01
AUTHORS (4)
ABSTRACT
The inception of modeling contextual information using models such as BERT, ELMo, and Flair has significantly improved representation learning for words. It has also given SOTA results in almost every NLP task - Machine Translation, Text Summarization, and Named Entity Recognition, to name a few. In this work, in addition to utilizing these dominant context-aware representations, we propose a Knowledge Aware Representation Learning (KARL) network for Named Entity Recognition (NER). We discuss the challenges existing methods face in incorporating world knowledge for NER and show how our proposed method could be leveraged to overcome those challenges. KARL is based on a Transformer encoder that utilizes large knowledge bases represented as fact triplets, converts them to a graph context, and extracts the essential entity information residing inside to generate contextualized triplet representations for feature augmentation. Experiments show that the augmentation done using KARL can considerably boost the performance of our NER system and achieve better results than existing approaches in the literature on three publicly available datasets, namely CoNLL 2003, CoNLL++, and OntoNotes v5. We also observe better generalization and application to a real-world setting from KARL on unseen entities.
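To make the augmentation idea in the abstract concrete, below is a minimal, hypothetical sketch in Python/PyTorch of the general pattern described: encode retrieved fact triplets with a small Transformer encoder, pool them into a knowledge vector, and concatenate that vector with contextual word representations before token classification. This is not the authors' implementation; the class names, embedding scheme, mean pooling, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TripletContextEncoder(nn.Module):
    """Encodes (head, relation, tail) fact triplets retrieved for an entity
    mention into a single knowledge vector via a small Transformer encoder.
    Hypothetical sketch of the triplet-contextualization idea, not KARL itself."""
    def __init__(self, vocab_size: int, d_model: int = 128, nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, triplet_ids: torch.Tensor) -> torch.Tensor:
        # triplet_ids: (batch, seq_len) -- flattened head/relation/tail token ids
        x = self.embed(triplet_ids)
        h = self.encoder(x)   # contextualize all triplets jointly
        return h.mean(dim=1)  # pool into one knowledge vector per example

class KnowledgeAugmentedTagger(nn.Module):
    """Concatenates the knowledge vector with contextual word representations
    (e.g., from BERT/ELMo/Flair) before a token-level classification head."""
    def __init__(self, word_dim: int, know_dim: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(word_dim + know_dim, num_labels)

    def forward(self, word_reprs: torch.Tensor, know_vec: torch.Tensor) -> torch.Tensor:
        # word_reprs: (batch, tokens, word_dim); know_vec: (batch, know_dim)
        k = know_vec.unsqueeze(1).expand(-1, word_reprs.size(1), -1)
        return self.classifier(torch.cat([word_reprs, k], dim=-1))

# Toy usage with random tensors standing in for real KB triplets and encoder output.
triplets = torch.randint(0, 1000, (2, 9))    # 3 triplets x 3 ids each
word_reprs = torch.randn(2, 12, 768)         # e.g., BERT-base token vectors
know = TripletContextEncoder(vocab_size=1000)(triplets)
logits = KnowledgeAugmentedTagger(768, 128, num_labels=9)(word_reprs, know)
print(logits.shape)  # torch.Size([2, 12, 9])
```

The design point this sketch illustrates is that the knowledge signal enters as an extra feature concatenated per token, so the base context-aware representations remain unchanged and the knowledge pathway can be trained or ablated independently.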