German BERT Model for Legal Named Entity Recognition
KEYWORDS: Named Entity Recognition, Overfitting, Entity Linking, Identification
DOI:
10.5220/0011749400003393
Publication Date:
2023-03-04T05:14:17Z
AUTHORS (3)
ABSTRACT
The use of BERT, one of the most popular language models, has led to improvements in many Natural Language Processing (NLP) tasks. One such task is Named Entity Recognition (NER), i.e. the automatic identification of named entities such as locations, persons, organizations, etc. in a given text. It is also an important base step for NLP tasks such as information extraction and argumentation mining. Even though there has been much research on NER using BERT, the same has not been explored in detail when it comes to Legal NLP, which applies various NLP techniques, such as sentence similarity and NER, specifically to legal data. There are only a handful of such models; however, none of these are aimed at legal documents in German. In this paper, we fine-tune a BERT model trained on German data (German BERT) on a Legal Entity Recognition (LER) dataset. To make sure our model is not overfitting, we performed stratified 10-fold cross-validation. The results we achieve by fine-tuning on the LER dataset outperform the BiLSTM-CRF+ model used by the dataset's authors. Finally, the model is openly available via HuggingFace.
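The stratified 10-fold cross-validation mentioned in the abstract keeps the label distribution of each fold close to that of the full dataset, so a rare entity class is not accidentally concentrated in one fold. A minimal stdlib-only sketch of that splitting idea (not the authors' actual code, which likely relies on a library implementation such as scikit-learn's `StratifiedKFold`):

```python
from collections import defaultdict

def stratified_kfold(labels, k=10):
    """Yield (train_indices, test_indices) pairs where each fold
    roughly preserves the overall label distribution."""
    # Group example indices by their label.
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)
    # Deal each label's indices round-robin into the k folds, so
    # every fold receives an even share of every class.
    folds = [[] for _ in range(k)]
    for indices in by_label.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    # Each fold serves once as the held-out test set.
    for i in range(k):
        test = sorted(folds[i])
        train = sorted(idx for j, fold in enumerate(folds)
                       if j != i for idx in fold)
        yield train, test

# Toy example: 20 "PER" and 10 "ORG" sentences; with k=5 every test
# fold holds 4 PER and 2 ORG, mirroring the 2:1 overall ratio.
labels = ["PER"] * 20 + ["ORG"] * 10
splits = list(stratified_kfold(labels, k=5))
```

In the paper's setting the labels would be (a stratification key derived from) the LER entity annotations, with `k=10` rather than the toy `k=5` used here.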
CITATIONS (10)