Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation
DOI:
10.48550/arxiv.1804.08069
Publication Date:
2018-01-01
AUTHORS (3)
ABSTRACT
The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as traditional systems do, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoder-decoder dialog model for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST, that improve VAEs and can discover interpretable semantics via either auto-encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation.
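Models like the ones the abstract describes learn *discrete* latent sentence codes inside an otherwise differentiable VAE, which requires a continuous relaxation of categorical sampling so gradients can flow through the latent choice. As a minimal illustrative sketch (not the paper's implementation), the standard Gumbel-Softmax relaxation can be written in plain NumPy; the function name and temperature parameter here are our own choices:

```python
import numpy as np

def gumbel_softmax(logits, temperature=1.0, rng=None):
    """Draw a relaxed one-hot sample from a categorical distribution.

    logits: unnormalized log-probabilities over the discrete latent classes.
    temperature: as it approaches 0, samples approach exact one-hot vectors;
    larger values give smoother, more uniform samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise via the inverse-CDF trick (small epsilon avoids log(0))
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u + 1e-20) + 1e-20)
    # Perturb logits with Gumbel noise, then apply a tempered softmax
    y = (logits + g) / temperature
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Example: sample a relaxed code over 3 hypothetical latent classes
probs = gumbel_softmax(np.array([2.0, 0.5, -1.0]), temperature=0.5,
                       rng=np.random.default_rng(0))
```

At low temperature the output concentrates mass on one class, so the encoder effectively assigns each sentence a discrete code while remaining trainable end to end by backpropagation.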