Structured Attention for Unsupervised Dialogue Structure Induction
Grammar induction
Representation
Inductive bias
DOI:
10.18653/v1/2020.emnlp-main.148
Publication Date:
2020-11-29T09:51:46Z
AUTHORS (8)
ABSTRACT
Inducing a meaningful structural representation from one or a set of dialogues is a crucial but challenging task in computational linguistics. Advancement made in this area is critical for dialogue system design and discourse analysis. It can also be extended to solve grammatical inference. In this work, we propose to incorporate structured attention layers into a Variational Recurrent Neural Network (VRNN) model with discrete latent states to learn dialogue structure in an unsupervised fashion. Compared to a vanilla VRNN, structured attention enables the model to focus on different parts of the source sentence embeddings while enforcing a structural inductive bias. Experiments show that on two-party dialogue datasets, the VRNN with structured attention learns semantic structures similar to the templates used to generate the corpus. On multi-party dialogue datasets, our model learns an interactive structure, demonstrating its capability of distinguishing speakers and addressees and automatically disentangling dialogues without explicit human annotation.
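The abstract's core idea of attending over source sentence embeddings can be illustrated with a minimal sketch. This is not the paper's structured attention layer (which enforces an additional structural inductive bias); it is a plain dot-product attention pooling, and the function and variable names (`attention_pool`, `emb`, `q`) are hypothetical, chosen only for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(embeddings, query):
    """Score each source sentence embedding against a query state and
    return the attention-weighted context vector. A minimal sketch of
    'focusing on different parts of the source sentence embeddings';
    the paper's structured attention layers are more involved."""
    scores = embeddings @ query       # (n,) dot-product scores
    weights = softmax(scores)         # attention distribution over sentences
    context = weights @ embeddings    # (d,) weighted combination
    return context, weights

# Toy data: 5 hypothetical sentence embeddings and a recurrent query state.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))
q = rng.normal(size=8)
ctx, w = attention_pool(emb, q)
```

The attention weights form a proper distribution (they sum to one), so the context vector is a convex combination of the sentence embeddings.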