Adapt and Prune Strategy for Multilingual Speech Foundational Model on Low-resourced Languages
DOI:
10.18653/v1/2023.mrl-1.7
Publication Date:
2023-12-10T21:58:19Z
AUTHORS (4)
ABSTRACT
While foundational speech models such as Whisper demonstrate state-of-the-art performance across various benchmarks, they necessitate an adaptation process for specific downstream tasks, particularly in low-resourced languages. Classical full fine-tuning (FFT) successfully adapts the model to a downstream task but requires computational resources proportional to the model's extensive size. Parameter-efficient fine-tuning (PEFT) methods, introduced to address this issue, effectively adapt a given model with fewer trainable parameters, but demand higher inference complexity due to the increased number of overall parameters. In response to these issues, we propose PEPSI, a Parameter-Efficient adaPtation of a Speech foundatIonal model. PEPSI integrates a compact adapter module into the decoder layers and removes neurons irrelevant to the target task. Through experiments, we showcase that PEPSI achieves performance surpassing PEFT and comparable to FFT, while significantly reducing the parameters to utilize on languages that require additional adaptation.
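The following is a minimal PyTorch sketch of the "adapt and prune" idea described in the abstract: a bottleneck adapter wrapped around a frozen decoder feed-forward block, plus removal of feed-forward neurons scored as least relevant. The class names, dimensions, and the magnitude-based relevance score are illustrative assumptions, not the paper's exact PEPSI implementation.

```python
# Hedged sketch: adapter insertion + neuron pruning on a frozen FFN block.
# All names and the pruning criterion are assumptions for illustration only.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Compact adapter: down-project, non-linearity, up-project, residual add."""

    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedDecoderFFN(nn.Module):
    """Frozen decoder feed-forward block wrapped with a trainable adapter."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        self.act = nn.GELU()
        self.adapter = BottleneckAdapter(d_model)
        # Freeze the pretrained weights; only the adapter is trained.
        for p in [*self.fc1.parameters(), *self.fc2.parameters()]:
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.fc2(self.act(self.fc1(x))))

    def prune_ffn_neurons(self, keep_ratio: float = 0.5) -> None:
        """Drop the hidden FFN neurons with the smallest weight magnitude.

        This shrinks fc1/fc2 in place, reducing the parameters kept for a
        given language; a real system would score neurons on task data.
        """
        score = self.fc1.weight.abs().sum(dim=1)           # one score per hidden neuron
        k = max(1, int(keep_ratio * score.numel()))
        keep = torch.topk(score, k).indices.sort().values  # indices of neurons to keep

        new_fc1 = nn.Linear(self.fc1.in_features, k)
        new_fc1.weight.data = self.fc1.weight.data[keep].clone()
        new_fc1.bias.data = self.fc1.bias.data[keep].clone()

        new_fc2 = nn.Linear(k, self.fc2.out_features)
        new_fc2.weight.data = self.fc2.weight.data[:, keep].clone()
        new_fc2.bias.data = self.fc2.bias.data.clone()

        # Keep the pruned backbone frozen as well.
        for p in [*new_fc1.parameters(), *new_fc2.parameters()]:
            p.requires_grad = False
        self.fc1, self.fc2 = new_fc1, new_fc2


if __name__ == "__main__":
    block = AdaptedDecoderFFN()
    x = torch.randn(2, 10, 512)                 # (batch, time, d_model)
    before = sum(p.numel() for p in block.parameters())
    block.prune_ffn_neurons(keep_ratio=0.5)     # remove half of the FFN neurons
    after = sum(p.numel() for p in block.parameters())
    print(block(x).shape, before, after)
```

In this sketch only the adapter remains trainable, while pruning shrinks the frozen backbone, which is one plausible way to reconcile the abstract's two goals of cheap adaptation and a reduced per-language parameter count.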