Enhancing Semantic Understanding with Self-Supervised Methods for Abstractive Dialogue Summarization
FOS: Computer and information sciences
Computation and Language (cs.CL)
Artificial Intelligence (cs.AI)
DOI:
10.21437/interspeech.2021-1270
Publication Date:
2021-08-27
AUTHORS (5)
ABSTRACT
Contextualized word embeddings can lead to state-of-the-art performance in natural language understanding. Recently, pre-trained deep contextualized text encoders such as BERT have shown their potential for improving tasks including abstractive summarization. Existing approaches to dialogue summarization focus on incorporating a large language model trained on large-scale corpora consisting of news articles rather than dialogues between multiple speakers. In this paper, we introduce self-supervised methods that compensate for these shortcomings when training the model. Our principle is to detect incoherent information flows through pretext tasks, thereby enhancing BERT's ability to contextualize dialogue representations. We build and fine-tune a shared encoder-decoder architecture on top of the enhanced BERT. We empirically evaluate our summarizer on the SAMSum corpus, a recently introduced dataset of dialogue summaries. All of our self-supervised methods contribute to improvements in summary quality as measured by ROUGE scores. Through an extensive ablation study, we also present a sensitivity analysis of critical hyperparameters: the probabilities of switching utterances and masking interlocutors.
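The abstract describes two pretext tasks for the self-supervised stage: switching utterances to create incoherent information flows and masking interlocutor names. Below is a minimal sketch of how such corrupted training examples could be constructed, assuming per-turn probabilities p_switch and p_mask and a generic [MASK] placeholder; the function names and parameters are illustrative and the paper's exact procedure may differ.

```python
import random

MASK_TOKEN = "[MASK]"  # assumed placeholder; the real token depends on the BERT vocabulary used


def corrupt_dialogue(dialogue, p_switch=0.15, p_mask=0.15, seed=None):
    """Build a pretext-task example from a dialogue.

    `dialogue` is a list of (speaker, utterance) pairs. With probability
    `p_switch` a turn's utterance is swapped with another randomly chosen
    turn (creating an incoherent information flow), and with probability
    `p_mask` a speaker name is replaced by a mask token. Returns the
    corrupted dialogue and per-turn labels marking which turns were
    switched, so an encoder such as BERT can be trained to detect them.
    """
    rng = random.Random(seed)
    speakers = [s for s, _ in dialogue]
    utterances = [u for _, u in dialogue]
    switched = [False] * len(dialogue)

    # Switch utterances: swap a turn with another randomly chosen turn.
    for i in range(len(utterances)):
        if rng.random() < p_switch and len(utterances) > 1:
            j = rng.randrange(len(utterances))
            if j != i:
                utterances[i], utterances[j] = utterances[j], utterances[i]
                switched[i] = switched[j] = True

    # Mask interlocutors: hide the speaker name for some turns.
    masked_speakers = [
        MASK_TOKEN if rng.random() < p_mask else s for s in speakers
    ]

    return list(zip(masked_speakers, utterances)), switched


if __name__ == "__main__":
    sample = [
        ("Amanda", "Hey, are you coming to the party tonight?"),
        ("Jerry", "Sure, what time does it start?"),
        ("Amanda", "Around 8, bring some snacks."),
    ]
    corrupted, labels = corrupt_dialogue(sample, seed=0)
    for (speaker, utt), was_switched in zip(corrupted, labels):
        print(f"{speaker}: {utt}  (switched={was_switched})")
```

In this sketch, the switched-turn labels would serve as supervision for a coherence-detection head on the encoder, while masked speaker names force the model to infer interlocutors from context; the downstream summarizer is then fine-tuned from the resulting encoder.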