Improving neural machine translation using gated state network and focal adaptive attention network
0202 electrical engineering, electronic engineering, information engineering
02 engineering and technology
DOI:
10.1007/s00521-021-06444-2
Publication Date:
2021-09-04T07:02:41Z
AUTHORS (5)
ABSTRACT
The currently predominant token-to-token attention mechanism has demonstrated its ability to capture word dependencies in neural machine translation. This mechanism treats a sequence as a bag of word tokens and computes the similarity between tokens without considering their intrinsic interactions. In this paper, we argue that this attention mechanism may miss the opportunity to take advantage of state information across multiple time steps. Thus, we propose a Gated State Network, which manipulates the flow of state information with sequential characteristics. We also incorporate a Focal Adaptive Attention Network, which utilizes a Gaussian distribution to concentrate the attention distribution on a predicted focal position and its neighborhood. Experimental results on the WMT'14 English–German and WMT'17 Chinese–English translation tasks demonstrate the effectiveness of the proposed approach.
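The core idea of the Focal Adaptive Attention Network described above — biasing the attention distribution toward a predicted focal position with a Gaussian — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the additive log-space bias, and the `sigma` parameter are assumptions for exposition.

```python
import numpy as np

def focal_attention(scores, focal_pos, sigma=1.0):
    """Illustrative sketch: add a Gaussian bias centered at `focal_pos`
    to raw attention scores, then normalize with a softmax.

    scores    : 1-D array of raw (pre-softmax) attention scores
    focal_pos : predicted focal position (index into the sequence)
    sigma     : width of the Gaussian neighborhood (assumed hyperparameter)
    """
    positions = np.arange(scores.shape[-1])
    # Log-space Gaussian penalty: zero at the focal position,
    # increasingly negative for distant positions.
    gauss_bias = -((positions - focal_pos) ** 2) / (2.0 * sigma ** 2)
    biased = scores + gauss_bias
    # Numerically stable softmax over the biased scores.
    e = np.exp(biased - biased.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With uniform raw scores, attention concentrates around the focal position.
weights = focal_attention(np.zeros(8), focal_pos=3, sigma=1.0)
```

Here the focal position is passed in directly; in the paper it is predicted by the model, and the Gaussian sharpness would likewise be learned or tuned rather than fixed.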
REFERENCES (36)
CITATIONS (5)