Low-rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition
Discriminative model
Regularization
Rank (linear algebra)
Domain Adaptation
DOI:
10.48550/arxiv.2309.15223
Publication Date:
2023-09
AUTHORS (18)
ABSTRACT
We propose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring. Although pretrained language models (LMs) like BERT have shown superior performance in second-pass rescoring, the high computational cost of scaling up the pretraining stage and adapting the pretrained models to specific domains limits their practical use in rescoring. Here we present a method based on low-rank decomposition to train a rescoring BERT model and adapt it to new domains using only a fraction (0.08%) of the pretrained parameters. These inserted matrices are optimized through a discriminative training objective along with a correlation-based regularization loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is evaluated on LibriSpeech and internal datasets with training times decreased by factors between 5.4 and 3.6.
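For readers unfamiliar with the two ingredients named in the abstract, the sketch below illustrates them in PyTorch: a frozen pretrained weight matrix augmented with trainable low-rank factors, and a minimum-WER-style discriminative loss over an n-best list. This is a minimal illustration under stated assumptions, not the authors' implementation; the names (LoRALinear, mwer_style_loss), the rank/alpha hyperparameters, and the exact loss form are assumptions, and the paper's correlation-based regularization term is omitted.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A of shape (r, d_in) and B of (d_out, r).
    Hypothetical illustration of LoRA; not the paper's code."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # pretrained weights stay frozen
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

def mwer_style_loss(scores: torch.Tensor, errors: torch.Tensor) -> torch.Tensor:
    """One common form of a discriminative rescoring objective (minimum-WER style).
    scores: (batch, n) second-pass scores for an n-best list, higher = better.
    errors: (batch, n) word-error counts of each hypothesis vs. the reference."""
    errors = errors.float()
    probs = torch.softmax(scores, dim=-1)          # hypothesis posterior from scores
    mean_err = errors.mean(dim=-1, keepdim=True)   # per-utterance error baseline
    return (probs * (errors - mean_err)).sum(dim=-1).mean()
```

As a usage sketch, wrapping a single 768-dimensional projection and counting trainable parameters shows how the trainable fraction stays small (the 0.08% figure in the abstract refers to the full model, not a single layer):

```python
layer = LoRALinear(nn.Linear(768, 768), rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable}/{total} ({trainable/total:.2%})")
```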