HateBERT: Retraining BERT for Abusive Language Detection in English

DOI: 10.18653/v1/2021.woah-1.3 Publication Date: 2021-07-27T01:42:51Z
ABSTRACT
We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of English Reddit comments from communities banned for being offensive, abusive, or hateful, which we have curated and made publicly available. We present the results of a detailed comparison between the general pre-trained model and its retrained version on three datasets for offensive, abusive language, and hate speech detection tasks. On all datasets, HateBERT outperforms the corresponding general model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by the compatibility of the annotated phenomena.
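The retraining described above continues BERT's masked-language-model pre-training on the RAL-E corpus. A minimal sketch of the standard BERT masking scheme that this objective relies on (select ~15% of tokens; of those, 80% become `[MASK]`, 10% a random token, 10% stay unchanged) is shown below; the function name, toy vocabulary, and example sentence are illustrative assumptions, not part of the paper:

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "mods", "banned", "this", "sub"]  # toy vocabulary for illustration

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """Apply BERT-style MLM masking: returns (corrupted tokens, labels).

    Labels hold the original token at each selected position and None
    elsewhere, so the loss is computed only on the selected positions.
    """
    rng = rng or random.Random(0)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:       # select ~15% of positions
            labels.append(tok)
            r = rng.random()
            if r < 0.8:                    # 80%: replace with [MASK]
                corrupted.append(MASK)
            elif r < 0.9:                  # 10%: replace with a random token
                corrupted.append(rng.choice(VOCAB))
            else:                          # 10%: keep the original token
                corrupted.append(tok)
        else:
            corrupted.append(tok)          # untouched, not predicted
            labels.append(None)
    return corrupted, labels

# Example: corrupt a sentence and recover the prediction targets.
corrupted, labels = mask_tokens("the mods banned this sub".split(),
                                rng=random.Random(42))
```

In practice this masking is handled by the pre-training framework (e.g. a data collator for language modeling); the sketch only makes the objective concrete.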