Knowledge Distillation for Tiny Speech Enhancement with Latent Feature Augmentation
DOI:
10.21437/interspeech.2024-1383
Publication Date:
2024-09-01T07:10:12Z
AUTHORS (3)
ABSTRACT
Recent deep neural network (DNN) models have achieved high performance in speech enhancement. However, deploying such complex models in resource-constrained environments can be challenging without significant performance degradation. Knowledge distillation (KD), a technique in which a smaller (student) model is trained to mimic the behavior of a larger, more complex (teacher) model, has emerged as a popular approach to address this challenge. In this paper, we propose a feature-augmentation based knowledge distillation method for speech enhancement, leveraging the information stored in the intermediate latent features of a DNN teacher to train a smaller, more efficient student model. Experimental results on the VoiceBank+DEMAND dataset demonstrate the effectiveness of the proposed method.
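The abstract describes distillation through intermediate latent features rather than only the teacher's output. The sketch below illustrates that general idea under stated assumptions: both models are hypothetical and assumed to return (enhanced signal, latent features), and the latent augmentation shown is a simple additive-Gaussian perturbation chosen purely for illustration; the paper's actual augmentation scheme and loss weighting are not specified here.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, noisy, clean, alpha=0.5, noise_std=0.1):
    """One training step of feature-based knowledge distillation (illustrative sketch).

    Assumptions (not from the paper): `student(noisy)` and `teacher(noisy)` each
    return a tuple (enhanced_waveform, latent_features) with matching feature shapes.
    """
    with torch.no_grad():
        _, t_feat = teacher(noisy)            # frozen teacher forward pass
    s_out, s_feat = student(noisy)            # student forward pass

    # Augment the teacher's latent features (illustrative additive noise).
    t_feat_aug = t_feat + noise_std * torch.randn_like(t_feat)

    task_loss = F.l1_loss(s_out, clean)       # signal-level enhancement loss
    kd_loss = F.mse_loss(s_feat, t_feat_aug)  # latent feature matching loss
    return task_loss + alpha * kd_loss
```

In a training loop, the returned loss would simply be backpropagated through the student while the teacher stays frozen; the weighting `alpha` between the task and distillation terms is a hypothetical hyperparameter.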