Dialogue Distillation: Open-Domain Dialogue Augmentation Using Unpaired Data

DOI: 10.18653/v1/2020.emnlp-main.277 Publication Date: 2020-11-29T14:51:46Z
ABSTRACT
Recent advances in open-domain dialogue systems rely on the success of neural models that are trained on large-scale data. However, collecting large-scale dialogue data is usually time-consuming and labor-intensive. To address this data dilemma, we propose a novel data augmentation method for training open-domain dialogue models by utilizing unpaired data. Specifically, a data-level distillation process is first proposed to construct augmented dialogues in which both the post and the response are retrieved from the unpaired data. A ranking module is then employed to filter out low-quality dialogues. Further, a model-level distillation process is employed to distill a teacher model trained on high-quality paired data into the augmented dialogue pairs, thereby preventing the dialogue model from being affected by the noise in the augmented data. Automatic and manual evaluation indicates that our method can produce high-quality dialogue pairs with diverse contents, and that the proposed data-level and model-level dialogue distillation can improve the performance of competitive baselines.
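The abstract outlines a two-stage pipeline: a data-level step that constructs new (post, response) pairs by retrieving both sides from unpaired text and filtering them with a ranking module, and a model-level step that distills a teacher trained on clean paired data into the model trained on the augmented pairs. The sketch below is a minimal, hypothetical illustration of that pipeline, not the paper's implementation: TF-IDF cosine similarity stands in for the retrieval and ranking modules, and a standard cross-entropy plus temperature-scaled KL loss stands in for the model-level distillation; all function names, thresholds, and weights are assumptions.

```python
# Hypothetical sketch of the dialogue-distillation pipeline described in the
# abstract. Retrieval and ranking are approximated with TF-IDF similarity;
# the model-level step uses a generic knowledge-distillation loss.

import torch
import torch.nn.functional as F
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def augment_from_unpaired(paired, unpaired_posts, unpaired_responses,
                          sim_threshold=0.3):
    """Data-level step: build new (post, response) pairs by retrieving both
    sides from unpaired sentences, using existing paired dialogues as anchors."""
    vectorizer = TfidfVectorizer().fit(
        [p for p, _ in paired] + [r for _, r in paired]
        + unpaired_posts + unpaired_responses)
    post_matrix = vectorizer.transform(unpaired_posts)
    resp_matrix = vectorizer.transform(unpaired_responses)

    augmented = []
    for post, response in paired:
        # Retrieve an unpaired sentence similar to the anchor post ...
        post_sims = cosine_similarity(vectorizer.transform([post]), post_matrix)[0]
        # ... and an unpaired sentence similar to the anchor response.
        resp_sims = cosine_similarity(vectorizer.transform([response]), resp_matrix)[0]
        # Ranking stand-in: keep the pair only if both retrieved sentences are
        # close enough to the anchor pair (threshold is a placeholder).
        if post_sims.max() >= sim_threshold and resp_sims.max() >= sim_threshold:
            augmented.append((unpaired_posts[post_sims.argmax()],
                              unpaired_responses[resp_sims.argmax()]))
    return augmented


def distillation_loss(student_logits, teacher_logits, target_ids,
                      alpha=0.5, temperature=2.0):
    """Model-level step: combine token-level cross entropy on the augmented
    pairs with a KL term toward a teacher trained on clean paired data."""
    vocab = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.reshape(-1, vocab), target_ids.reshape(-1))
    t = temperature
    kl = F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                  F.softmax(teacher_logits / t, dim=-1),
                  reduction="batchmean") * (t * t)
    return alpha * ce + (1 - alpha) * kl
```

The similarity threshold, loss weight alpha, and temperature above are placeholders; the abstract does not specify how the ranking module scores pairs or how the teacher signal is weighted.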