Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues
DOI:
10.1609/aaai.v35i16.17666
Publication Date:
2022-09-08T20:15:32Z
AUTHORS (6)
ABSTRACT
Building an intelligent dialogue system with the ability to select a proper response according to a multi-turn context is a greatly challenging task. Existing studies focus on building a context-response matching model with various neural architectures or pre-trained language models (PLMs), typically learning with a single response prediction task. These approaches overlook many potential training signals contained in dialogue data, which might be beneficial for context understanding and could produce better features for response prediction. Besides, responses retrieved from existing systems supervised in the conventional way still face critical challenges, including incoherence and inconsistency. To address these issues, in this paper we propose learning a context-response matching model with auxiliary self-supervised tasks designed for the dialogue data, based on pre-trained language models. Specifically, we introduce four self-supervised tasks, namely next session prediction, utterance restoration, incoherence detection, and consistency discrimination, and jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner. By this means, the auxiliary tasks can guide the learning of the matching model to achieve a better local optimum and select a more proper response. Experiment results on two benchmarks indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection in retrieval-based dialogues, and our model achieves new state-of-the-art results on both datasets.
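The multi-task training scheme described in the abstract can be sketched as a weighted combination of the main matching loss and the four auxiliary self-supervised losses. This is a minimal illustration only; the loss values, the weighting scheme, and the function names are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a multi-task objective: the main context-response
# matching loss plus a weighted sum of auxiliary self-supervised task losses.
# All numbers and the aux_weight parameter are illustrative assumptions.

def joint_loss(matching_loss, aux_losses, aux_weight=1.0):
    """L_total = L_match + w * sum(L_aux) over the auxiliary tasks."""
    return matching_loss + aux_weight * sum(aux_losses)

# The four auxiliary tasks named in the abstract; loss values are made up.
aux = {
    "next_session_prediction": 0.52,
    "utterance_restoration": 0.81,
    "incoherence_detection": 0.33,
    "consistency_discrimination": 0.47,
}
total = joint_loss(0.95, aux.values(), aux_weight=0.5)
print(round(total, 3))  # → 2.015
```

In this framing, each auxiliary task contributes gradient signal to the shared PLM encoder during training, while only the matching head is used at inference time.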