ALL-IN-ONE: Multi-Task Learning BERT models for Evaluating Peer Assessments
CONCEPTS
Feature Learning
Feature (linguistics)
Representation
DOI:
10.48550/arxiv.2110.03895
Publication Date:
2021-01-01
AUTHORS (6)
ABSTRACT
Peer assessment has been widely applied across diverse academic fields over the last few decades and has demonstrated its effectiveness. However, the advantages of peer assessment can only be achieved with high-quality peer reviews. Previous studies have found that high-quality review comments usually comprise several features (e.g., contain suggestions, mention problems, use a positive tone). Thus, researchers have attempted to evaluate peer-review comments by detecting different features using various machine learning and deep learning models. However, no single study has investigated using a multi-task learning (MTL) model to detect multiple features simultaneously. This paper presents two MTL models for evaluating peer-review comments by leveraging the state-of-the-art pre-trained language representation models BERT and DistilBERT. Our results demonstrate that BERT-based models significantly outperform previous GloVe-based methods by around 6% in F1-score on tasks of detecting a single feature, and that MTL further improves performance while reducing model size.
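To make the architecture described in the abstract concrete, the sketch below shows one common way to build such a multi-task classifier in PyTorch with Hugging Face Transformers: a shared BERT/DistilBERT encoder with one binary classification head per review-comment feature, trained by summing the per-task losses. This is a minimal illustration, not the authors' released code; the task names (suggestion, problem, positive tone) follow the abstract's examples, and the checkpoint name and all hyperparameters are assumptions.

# Minimal multi-task sketch: one shared encoder, one binary head per feature.
# Task set and hyperparameters are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

TASKS = ["suggestion", "problem", "positive_tone"]

class MultiTaskReviewClassifier(nn.Module):
    def __init__(self, model_name="distilbert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared across all tasks
        hidden = self.encoder.config.hidden_size
        # One lightweight linear head per feature; the encoder does the shared work.
        self.heads = nn.ModuleDict({t: nn.Linear(hidden, 2) for t in TASKS})

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # representation at the [CLS] position
        return {t: head(cls) for t, head in self.heads.items()}

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = MultiTaskReviewClassifier()
batch = tokenizer(["Consider adding a citation to support this claim."],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])

# Joint training step: sum per-task cross-entropy losses (dummy labels shown).
labels = {t: torch.tensor([1]) for t in TASKS}
loss = sum(nn.functional.cross_entropy(logits[t], labels[t]) for t in TASKS)
loss.backward()

Because the heads share one encoder, detecting all features requires a single forward pass through one model, which is the source of the size reduction the abstract attributes to MTL relative to training one BERT per feature.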