Lightweight Adapter Tuning for Multilingual Speech Translation
Concepts: Adapter (computing) · Speech translation
DOI: 10.18653/v1/2021.acl-short.103
Publication Date: 2021-07-27
AUTHORS (6)
ABSTRACT
Adapter modules were recently introduced as an efficient alternative to fine-tuning in NLP. Adapter tuning consists in freezing the pretrained parameters of a model and injecting lightweight modules between layers, resulting in the addition of only a small number of task-specific trainable parameters. While adapter tuning was investigated for multilingual neural machine translation, this paper proposes a comprehensive analysis of adapters for multilingual speech translation (ST). Starting from different pre-trained models (a multilingual ST model trained on parallel data, or a multilingual BART (mBART) trained on non-parallel multilingual data), we show that adapters can be used to: (a) efficiently specialize ST to specific language pairs with a low extra cost in terms of parameters, and (b) transfer from an automatic speech recognition (ASR) task and an mBART pre-trained model to a multilingual ST task. Experiments show that adapter tuning offers competitive results to full fine-tuning, while being much more parameter-efficient.
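To make the adapter-tuning idea in the abstract concrete, the sketch below shows a typical bottleneck adapter in PyTorch (down-projection, non-linearity, up-projection, and a residual connection) together with the freezing step that leaves only adapter parameters trainable. The layer-norm placement, ReLU activation, and bottleneck dimension of 64 are illustrative assumptions, not necessarily the exact configuration used in the paper.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project,
    with a residual connection around the module."""
    def __init__(self, d_model: int, bottleneck_dim: int = 64):
        super().__init__()
        self.layer_norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation intact.
        return x + self.up(self.act(self.down(self.layer_norm(x))))

def prepare_for_adapter_tuning(backbone: nn.Module, adapters: nn.ModuleList):
    # Freeze every pretrained parameter; only adapter weights stay trainable.
    for p in backbone.parameters():
        p.requires_grad = False
    for p in adapters.parameters():
        p.requires_grad = True
    return backbone, adapters

# Minimal usage check: the adapter preserves the input shape.
d_model = 512
adapter = Adapter(d_model, bottleneck_dim=64)
x = torch.randn(8, 100, d_model)   # (batch, time, hidden)
y = adapter(x)                     # same shape as x

A single adapter of this shape adds roughly 2 * d_model * bottleneck_dim parameters per layer, which is why the approach remains far more parameter-efficient than full fine-tuning of the backbone.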