TENER: Adapting Transformer Encoder for Named Entity Recognition
DOI: 10.48550/arXiv.1911.04474
Publication Date: 2019-11
AUTHORS (4): Hang Yan, Bocao Deng, Xiaonan Li, Xipeng Qiu
ABSTRACT
The Bidirectional long short-term memory networks (BiLSTM) have been widely used as an encoder in models solving the named entity recognition (NER) task. Recently, the Transformer has been broadly adopted in various Natural Language Processing (NLP) tasks owing to its parallelism and advantageous performance. Nevertheless, the performance of the Transformer in NER is not as good as it is in other NLP tasks. In this paper, we propose TENER, a NER architecture adopting an adapted Transformer Encoder to model the character-level features and word-level features. By incorporating the direction- and relative-distance-aware attention and the un-scaled attention, we prove that the Transformer-like encoder is just as effective for NER as for other NLP tasks.