Enriching and Controlling Global Semantics for Text Summarization
DOI: 10.18653/v1/2021.emnlp-main.744
Publication Date: 2021-12-17
AUTHORS (4)
ABSTRACT
Recently, Transformer-based models have been proven effective in the abstractive summarization task by creating fluent and informative summaries. Nevertheless, these models still suffer from the short-range dependency problem, causing them to produce summaries that miss key points of the document. In this paper, we attempt to address this issue by introducing a neural topic model empowered with normalizing flow to capture the global semantics of the document, which are then integrated into the summarization model. In addition, to avoid the overwhelming effect of global semantics on contextualized representation, we introduce a mechanism to control the amount of global semantics supplied to the text generation module. Our method outperforms state-of-the-art summarization models on five common datasets, namely CNN/DailyMail, XSum, Reddit TIFU, arXiv, and PubMed.
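The abstract describes two components: a neural topic model whose latent posterior is enriched by a normalizing flow, and a gate that controls how much of the resulting global semantic vector is injected into the contextualized token representations before generation. The following is a minimal PyTorch sketch of those two ideas, not the paper's exact architecture; the use of planar flows, a sigmoid gate, and all module names and dimensions are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's design): a VAE-style neural
# topic model with planar normalizing flows, plus a learned gate that mixes
# the global topic vector into contextualized token representations.
import torch
import torch.nn as nn


class PlanarFlow(nn.Module):
    """One planar-flow step: z' = z + u * tanh(w^T z + b)."""

    def __init__(self, dim: int):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, dim) -> (batch, dim)
        return z + self.u * torch.tanh(z @ self.w + self.b).unsqueeze(-1)


class FlowTopicModel(nn.Module):
    """Bag-of-words encoder -> Gaussian latent -> flow -> topic vector."""

    def __init__(self, vocab_size: int, topic_dim: int, n_flows: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, 256), nn.ReLU())
        self.mu = nn.Linear(256, topic_dim)
        self.logvar = nn.Linear(256, topic_dim)
        self.flows = nn.ModuleList(PlanarFlow(topic_dim) for _ in range(n_flows))

    def forward(self, bow: torch.Tensor) -> torch.Tensor:
        h = self.encoder(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        for flow in self.flows:  # flows enrich the simple Gaussian posterior
            z = flow(z)
        return z  # global semantic (topic) vector


class GatedFusion(nn.Module):
    """Gate how much global semantics enters each token representation."""

    def __init__(self, hidden_dim: int, topic_dim: int):
        super().__init__()
        self.project = nn.Linear(topic_dim, hidden_dim)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor, topic: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim); topic: (batch, topic_dim)
        t = self.project(topic).unsqueeze(1).expand_as(hidden)
        g = torch.sigmoid(self.gate(torch.cat([hidden, t], dim=-1)))
        return hidden + g * t  # gated injection of global semantics


if __name__ == "__main__":
    bow = torch.rand(2, 5000)          # toy bag-of-words document input
    hidden = torch.randn(2, 16, 768)   # toy contextualized encoder states
    topic = FlowTopicModel(5000, 64)(bow)
    fused = GatedFusion(768, 64)(hidden, topic)
    print(fused.shape)                 # torch.Size([2, 16, 768])
```

The per-token sigmoid gate is one plausible reading of the abstract's control mechanism: it lets the model learn how strongly the global topic vector should influence each position, so global semantics enrich the contextualized representation without overwhelming it.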