Pretraining-Based Natural Language Generation for Text Summarization
Text generation
DOI:
10.48550/arxiv.1902.09243
Publication Date:
2019-01-01
AUTHORS (3)
ABSTRACT
In this paper, we propose a novel pretraining-based encoder-decoder framework, which can generate the output sequence based on the input sequence in a two-stage manner. For the encoder of our model, we encode the input sequence into context representations using BERT. For the decoder, there are two stages: in the first stage, we use a Transformer-based decoder to generate a draft output sequence; in the second stage, we mask each word of the draft sequence and feed it to BERT, then, by combining the input sequence and the draft representation generated by BERT, we use a Transformer-based decoder to predict the refined word for each masked position. To the best of our knowledge, our approach is the first method that applies BERT to text generation tasks. As a first step in this direction, we evaluate the proposed method on the text summarization task. Experimental results show that our model achieves new state-of-the-art results on both the CNN/Daily Mail and New York Times datasets.
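The two-stage procedure described in the abstract can be sketched in a few lines of PyTorch. The sketch below is purely illustrative and is not the authors' implementation: ToyBertEncoder, DraftDecoder, refine, the toy vocabulary size, and the MASK_ID placeholder are hypothetical stand-ins for BERT and the paper's Transformer decoders. It only shows the control flow of encoding the source, drafting a summary, then masking and re-predicting each draft word.

```python
# Illustrative sketch (not the authors' code) of the two-stage decoding idea:
# a BERT-style encoder produces context representations, a Transformer decoder
# writes a draft, then each draft token is masked in turn and re-predicted.
import torch
import torch.nn as nn

VOCAB, D_MODEL, MASK_ID = 1000, 64, 3  # toy sizes; MASK_ID stands in for BERT's [MASK]

class ToyBertEncoder(nn.Module):
    """Stand-in for BERT: embeds tokens and runs a Transformer encoder."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, ids):
        return self.encoder(self.embed(ids))

class DraftDecoder(nn.Module):
    """Transformer decoder head used for both drafting and refining."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerDecoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(D_MODEL, VOCAB)

    def forward(self, ids, memory):
        return self.out(self.decoder(self.embed(ids), memory))

def refine(draft_ids, src_repr, bert, refiner):
    """Stage two: mask each draft position, re-encode with the BERT stand-in,
    and predict a refined word for the masked slot."""
    refined = draft_ids.clone()
    for pos in range(draft_ids.size(1)):
        masked = refined.clone()
        masked[:, pos] = MASK_ID                       # hide this word from BERT
        draft_repr = bert(masked)                      # context of the masked draft
        logits = refiner(masked, torch.cat([src_repr, draft_repr], dim=1))
        refined[:, pos] = logits[:, pos].argmax(-1)    # keep the refined prediction
    return refined

if __name__ == "__main__":
    bert, drafter, refiner = ToyBertEncoder(), DraftDecoder(), DraftDecoder()
    src = torch.randint(4, VOCAB, (1, 20))             # dummy source article ids
    src_repr = bert(src)                               # encode the input with BERT

    # Stage one: greedy draft of 8 tokens starting from a BOS-like token (id 2).
    draft = torch.full((1, 1), 2, dtype=torch.long)
    for _ in range(8):
        logits = drafter(draft, src_repr)
        draft = torch.cat([draft, logits[:, -1:].argmax(-1)], dim=1)

    # Stage two: refine the draft (drop the BOS-like token first).
    print(refine(draft[:, 1:], src_repr, bert, refiner).shape)  # torch.Size([1, 8])
```

In the paper the second-stage decoder conditions on BERT's representation of the masked draft together with the encoded source; the sketch approximates this by concatenating the two memories along the sequence dimension, which is a simplifying assumption rather than the published architecture.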