Model-Agnostic Meta-Learning for Multilingual Hate Speech Detection

Keywords: Transfer learning, Initialization, Scarcity, Emotion detection, Domain adaptation
DOI: 10.48550/arxiv.2303.02513 Publication Date: 2023-01-01
ABSTRACT
Hate speech in social media is a growing phenomenon, and detecting such toxic content has recently gained significant traction in the research community. Existing studies have explored fine-tuning language models (LMs) to perform hate speech detection, and these solutions have yielded strong performance. However, most of them are limited to English only, neglecting the bulk of hateful content that is generated in other languages, particularly low-resource languages. Developing a classifier that captures the nuances of hate speech with limited data is extremely challenging. To fill this gap, we propose HateMAML, a model-agnostic meta-learning-based framework that effectively performs hate speech detection in low-resource languages. HateMAML utilizes a self-supervision strategy to overcome the limitation of data scarcity and produces a better LM initialization for fast adaptation to an unseen target language (i.e., cross-lingual transfer) or to new datasets (i.e., domain generalization). Extensive experiments were conducted on five datasets across eight different languages. The results show that HateMAML outperforms state-of-the-art baselines by more than 3% in the cross-domain multilingual transfer setting. We also conduct ablation studies to analyze the characteristics of HateMAML.
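The abstract's core mechanism is model-agnostic meta-learning: an inner loop adapts a shared initialization to each task, and an outer loop updates that initialization so one or a few gradient steps suffice on an unseen task. The sketch below is a minimal first-order MAML illustration on a toy 1-D regression family, not the paper's HateMAML implementation; every function name, task distribution, and hyperparameter here is an illustrative assumption.

```python
import numpy as np

# Toy task family: y = a * x for a random slope `a` per task.
# We meta-learn a scalar initialization w_meta (standing in for LM weights)
# so that ONE inner-loop SGD step adapts well to a new slope.
rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    """Squared error for the model y_hat = w * x, and its gradient w.r.t. w."""
    err = w * x - y
    return float(np.mean(err ** 2)), float(np.mean(2 * err * x))

def inner_adapt(w, x, y, inner_lr=0.1):
    """One task-specific SGD step (the MAML inner loop)."""
    _, g = loss_and_grad(w, x, y)
    return w - inner_lr * g

w_meta = 0.0
for _ in range(500):                       # outer meta-training loop
    meta_grad = 0.0
    for _ in range(4):                     # a batch of sampled tasks
        a = rng.uniform(1.0, 3.0)
        x_s, x_q = rng.normal(size=10), rng.normal(size=10)
        w_task = inner_adapt(w_meta, x_s, a * x_s)    # adapt on support set
        _, g_q = loss_and_grad(w_task, x_q, a * x_q)  # evaluate on query set
        meta_grad += g_q                   # first-order MAML approximation
    w_meta -= 0.01 * meta_grad / 4         # meta-update of the initialization

# Fast adaptation to an unseen task: one step from the meta-learned init
# should beat one step from a naive zero init on the same data.
a_new, x_new = 2.5, rng.normal(size=10)
y_new = a_new * x_new
post_meta, _ = loss_and_grad(inner_adapt(w_meta, x_new, y_new), x_new, y_new)
post_zero, _ = loss_and_grad(inner_adapt(0.0, x_new, y_new), x_new, y_new)
print(post_meta < post_zero)  # → True
```

In HateMAML the inner loop would instead fine-tune a multilingual LM on a small support set in a target language, with a self-supervision strategy compensating for label scarcity; the toy above only mirrors the two-loop structure.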