Enhancing Aspect-Based Sentiment Analysis Models with Progressive Self-supervised Attention Learning

Keywords: Sentiment Analysis, Regularization
DOI: 10.48550/arxiv.2103.03446 Publication Date: 2021-01-01
ABSTRACT
In aspect-based sentiment analysis (ABSA), many neural models are equipped with an attention mechanism to quantify the contribution of each context word to sentiment prediction. However, such a mechanism suffers from one drawback: only a few frequent words with sentiment polarities tend to be taken into consideration for the final sentiment decision, while abundant infrequent sentiment words are ignored by models. To deal with this issue, we propose a progressive self-supervised attention learning approach for attentional ABSA models. In this approach, we iteratively perform sentiment prediction on all training instances and continually learn useful attention supervision information in the meantime. During training, at each iteration, the context words with the highest impact on sentiment prediction, identified based on their attention weights or gradients, are extracted as words with active/misleading influence on the correct/incorrect prediction for each instance. Words extracted in this way are masked for subsequent iterations. To exploit these extracted words for refining ABSA models, we augment the conventional training objective with a regularization term that encourages models not only to take full advantage of the extracted active context words but also to decrease the weights of those misleading words. We integrate the proposed approach into three state-of-the-art neural ABSA models. Experiment results and in-depth analyses show that our approach yields better attention results and significantly enhances the performance of all three models. We release the source code and trained models at https://github.com/DeepLearnXMU/PSSAttention.
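The extraction-and-masking loop described in the abstract can be sketched in a few lines. The code below is a toy illustration, not the paper's implementation: function names, the simulated per-iteration outcomes, and the exact form of the regularization term are assumptions made for clarity. Each iteration picks the unmasked word with the highest attention weight, labels it active (correct prediction) or misleading (incorrect prediction), and masks it for subsequent iterations; a simple regularizer then rewards attention on active words and penalizes attention on misleading ones.

```python
# Hypothetical simplification of progressive self-supervised attention
# learning; names and the regularizer form are illustrative assumptions.

def extract_and_mask(attention, masked, correct):
    """One extraction step for a single training instance.

    attention: list of attention weights, one per context word
    masked:    set of word indices masked in earlier iterations
    correct:   whether the model's sentiment prediction was correct
    Returns (index, label) with label 'active' or 'misleading',
    or None if every word is already masked.
    """
    candidates = [i for i in range(len(attention)) if i not in masked]
    if not candidates:
        return None
    top = max(candidates, key=lambda i: attention[i])  # highest-impact word
    return top, ("active" if correct else "misleading")

def attention_regularizer(attention, active, misleading):
    """Toy stand-in for the paper's regularization term: penalize attention
    on misleading words, reward attention on active words (lower is better
    when added to the training loss)."""
    reward = sum(attention[i] for i in active)
    penalty = sum(attention[i] for i in misleading)
    return penalty - reward

# Progressive loop over iterations for one instance.
attention = [0.05, 0.60, 0.25, 0.10]
masked, active, misleading = set(), set(), set()
for correct in [True, False, True]:      # simulated prediction outcomes
    step = extract_and_mask(attention, masked, correct)
    if step is None:
        break
    idx, label = step
    (active if label == "active" else misleading).add(idx)
    masked.add(idx)                      # mask for subsequent iterations

print(sorted(active), sorted(misleading))  # extracted supervision labels
```

In the real approach the attention weights would be re-computed by the model at each iteration after masking, and the regularizer would be added to the conventional training objective; here the weights are held fixed purely to show the extraction logic.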