A Simple and Effective Unified Encoder for Document-Level Machine Translation
DOI: 10.18653/v1/2020.acl-main.321
Publication Date: 2020-07-29
AUTHORS (3)
Shuming Ma, Dongdong Zhang, Ming Zhou
ABSTRACT
Most of the existing models for document-level machine translation adopt dual-encoder structures: the representations of the source sentences and the document-level contexts are modeled with two separate encoders. Although these models can make use of the document-level contexts, they do not fully model the interaction between the contexts and the source sentences, and they cannot directly adapt to recent pre-training models (e.g., BERT) that encode multiple sentences with a single encoder. In this work, we propose a simple and effective unified encoder that outperforms the dual-encoder baselines in terms of BLEU and METEOR scores. Moreover, pre-training can further boost the performance of our proposed model.
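
The core idea the abstract describes, encoding the document context and the current source sentence jointly with one encoder, BERT-style, rather than with two separate encoders, can be sketched as below. This is a minimal illustration under assumptions, not the authors' implementation: the class name UnifiedEncoder, the segment embeddings used to mark context vs. source tokens, and all hyperparameters are illustrative, and positional encodings are omitted for brevity.

```python
# Minimal sketch of a unified encoder for document-level MT (assumed design,
# not the paper's code): context and source are concatenated into one
# sequence so a single Transformer encoder's self-attention models their
# interaction directly, instead of using two separate encoders.
import torch
import torch.nn as nn

class UnifiedEncoder(nn.Module):
    def __init__(self, vocab_size, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # Segment embedding distinguishes the two parts: 0 = context, 1 = source.
        self.seg_emb = nn.Embedding(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, context_ids, source_ids):
        # One joint sequence: [context tokens ; source tokens].
        ids = torch.cat([context_ids, source_ids], dim=1)
        segs = torch.cat([
            torch.zeros_like(context_ids),  # context segment
            torch.ones_like(source_ids),    # source segment
        ], dim=1)
        x = self.tok_emb(ids) + self.seg_emb(segs)
        # Output: (batch, ctx_len + src_len, d_model); every source token
        # attends to every context token within the single encoder.
        return self.encoder(x)

# Usage: jointly encode a 7-token context and a 5-token source sentence.
enc = UnifiedEncoder(vocab_size=32000)
ctx = torch.randint(0, 32000, (1, 7))
src = torch.randint(0, 32000, (1, 5))
out = enc(ctx, src)  # shape: torch.Size([1, 12, 512])
```

Because the whole input is a single token sequence with segment markers, an input layout of this kind is also directly compatible with pre-trained encoders such as BERT, which is the adaptability advantage the abstract contrasts against dual-encoder structures.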