Guiding Attention in Sequence-to-Sequence Models for Dialogue Act Prediction
DOI: 10.1609/aaai.v34i05.6259
Publication Date: 2020-06-29
ABSTRACT
The task of predicting dialog acts (DA) based on conversational history is a key component in the development of conversational agents. Accurately predicting DAs requires precise modeling of both the conversation and the global tag dependencies. We leverage seq2seq approaches widely adopted in Neural Machine Translation (NMT) to improve the modelling of tag sequentiality. Seq2seq models are known to learn complex global dependencies, while currently proposed approaches using linear conditional random fields (CRF) only model local tag dependencies. In this work, we introduce a seq2seq model tailored for DA classification, using a hierarchical encoder, a novel guided attention mechanism, and beam search applied to both training and inference. Compared to the state of the art, our model does not require handcrafted features and is trained end-to-end. Furthermore, the proposed approach achieves an unmatched accuracy score of 85% on SwDA and a state-of-the-art accuracy score of 91.6% on MRDA.
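To make the architecture described in the abstract more concrete, the following is a minimal sketch, assuming PyTorch, of a seq2seq dialogue-act tagger with a hierarchical (word-level then utterance-level) encoder and an attention-based tag decoder. The module names, layer sizes, and tag count are illustrative, not taken from the paper, and the paper's specific guided attention mechanism and beam search are not reproduced here; a plain soft-attention decoder trained with teacher forcing stands in for them.

# Minimal sketch (not the authors' implementation) of a seq2seq dialogue-act tagger
# with a hierarchical encoder and attention; all names and sizes are illustrative.
import torch
import torch.nn as nn


class HierarchicalEncoder(nn.Module):
    """Encode each utterance with a word-level GRU, then the conversation with an utterance-level GRU."""
    def __init__(self, vocab_size, emb_dim=64, utt_dim=128, conv_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_rnn = nn.GRU(emb_dim, utt_dim, batch_first=True)
        self.conv_rnn = nn.GRU(utt_dim, conv_dim, batch_first=True)

    def forward(self, dialog):                       # dialog: (batch, n_utts, n_words)
        b, n, w = dialog.shape
        words = self.embed(dialog.view(b * n, w))    # embed all utterances at once
        _, utt_h = self.word_rnn(words)              # last hidden state summarizes each utterance
        utt_vecs = utt_h.squeeze(0).view(b, n, -1)   # (batch, n_utts, utt_dim)
        conv_states, _ = self.conv_rnn(utt_vecs)     # contextualized utterance states
        return conv_states                           # (batch, n_utts, conv_dim)


class AttnTagDecoder(nn.Module):
    """Predict one dialogue-act tag per utterance, attending over the encoder states."""
    def __init__(self, n_tags, conv_dim=128, dec_dim=128):
        super().__init__()
        self.tag_embed = nn.Embedding(n_tags + 1, dec_dim)   # extra index used as start-of-sequence tag
        self.rnn_cell = nn.GRUCell(dec_dim + conv_dim, dec_dim)
        self.attn = nn.Linear(dec_dim, conv_dim)
        self.out = nn.Linear(dec_dim + conv_dim, n_tags)

    def forward(self, enc_states, gold_tags):        # gold_tags: (batch, n_utts), used for teacher forcing
        b, n, _ = enc_states.shape
        h = enc_states.new_zeros(b, self.rnn_cell.hidden_size)
        prev = gold_tags.new_full((b,), self.tag_embed.num_embeddings - 1)   # start tag id
        logits = []
        for t in range(n):
            query = self.attn(h).unsqueeze(2)                  # (batch, conv_dim, 1)
            scores = torch.bmm(enc_states, query).squeeze(2)   # attention scores over utterances
            ctx = (scores.softmax(-1).unsqueeze(2) * enc_states).sum(1)
            h = self.rnn_cell(torch.cat([self.tag_embed(prev), ctx], dim=-1), h)
            logits.append(self.out(torch.cat([h, ctx], dim=-1)))
            prev = gold_tags[:, t]                             # teacher forcing on gold tags
        return torch.stack(logits, dim=1)                      # (batch, n_utts, n_tags)


if __name__ == "__main__":
    enc, dec = HierarchicalEncoder(vocab_size=1000), AttnTagDecoder(n_tags=43)
    dialog = torch.randint(1, 1000, (2, 5, 12))       # 2 toy dialogs, 5 utterances, 12 words each
    tags = torch.randint(0, 43, (2, 5))
    loss = nn.functional.cross_entropy(dec(enc(dialog), tags).flatten(0, 1), tags.flatten())
    print(loss.item())

In this sketch the decoder conditions each tag on the previously emitted tags (via the tag embedding fed back into the GRU cell), which is how a seq2seq formulation captures global tag dependencies beyond the local transitions modeled by a linear CRF.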