Sequence Level Contrastive Learning for Text Summarization

DOI: 10.48550/arxiv.2109.03481 Publication Date: 2021-01-01
ABSTRACT
Contrastive learning models have achieved great success in unsupervised visual representation learning, where they maximize the similarities between feature representations of different views of the same image, while minimizing the similarities between feature representations of views of different images. In text summarization, the output summary is a shorter form of the input document and the two have similar meanings. In this paper, we propose a contrastive learning model for supervised abstractive text summarization, where we view a document, its gold summary, and its model-generated summaries as different views of the same mean representation and maximize the similarities between them during training. We improve over a strong sequence-to-sequence text generation model (i.e., BART) on three different summarization datasets. Human evaluation also shows that our model achieves better faithfulness ratings compared to its counterpart without contrastive objectives.
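To make the objective described in the abstract concrete, the sketch below shows one plausible form of a sequence-level contrastive loss: pooled representations of the document, its gold summary, and a generated summary are treated as views of the same content, and their pairwise cosine similarities are maximized. This is a minimal illustration only; the function name, the pooling choice, and the exact loss form are assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def seq_level_contrastive_loss(doc_repr: torch.Tensor,
                               gold_repr: torch.Tensor,
                               gen_repr: torch.Tensor) -> torch.Tensor:
    """Pull the three views of each example toward each other.

    Each argument is a (batch, hidden) tensor holding a pooled sequence
    representation (e.g., mean-pooled encoder states) of the input
    document, its gold summary, and a model-generated summary. Following
    the abstract, the loss maximizes pairwise cosine similarity between
    views; it is a sketch of the idea, not the paper's exact objective.
    """
    # L2-normalize so dot products become cosine similarities.
    views = [F.normalize(v, dim=-1) for v in (doc_repr, gold_repr, gen_repr)]
    pairs = [(0, 1), (0, 2), (1, 2)]
    # Cosine similarity of matching examples in each pair of views,
    # averaged over the batch; maximizing it = minimizing its negation.
    sims = [(views[i] * views[j]).sum(dim=-1).mean() for i, j in pairs]
    return -torch.stack(sims).mean()

# Hypothetical usage: add the contrastive term to the usual
# sequence-to-sequence negative log-likelihood during fine-tuning.
# total_loss = nll_loss + lam * seq_level_contrastive_loss(d, g, s)

In practice such a term would be weighted against the standard maximum-likelihood loss of the sequence-to-sequence model (BART in the paper); the weighting coefficient here is likewise an assumption.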