Environmental Sound Classification with Parallel Temporal-Spectral Attention
SUBJECTS
Computer Science - Sound (cs.SD)
Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS)
Computer Science - Machine Learning (cs.LG)
FOS: Computer and information sciences
FOS: Electrical engineering, electronic engineering, information engineering
DOI: 10.21437/interspeech.2020-1219
Publication Date: 2020-10-27T09:22:11Z
AUTHORS (4)
ABSTRACT
Submitted to INTERSPEECH 2020.

Convolutional neural networks (CNNs) are among the best-performing neural network architectures for environmental sound classification (ESC). Recently, temporal attention mechanisms have been used in CNNs to capture useful information from the relevant time frames for audio classification, especially for weakly labelled data where the onset and offset times of the sound events are not provided. In these methods, however, the inherent spectral characteristics and variations are not explicitly exploited when obtaining the deep features. In this paper, we propose a novel parallel temporal-spectral attention mechanism for CNNs to learn discriminative sound representations, which enhances the temporal and spectral features by capturing the importance of different time frames and frequency bands. Parallel branches are constructed so that temporal attention and spectral attention can be applied separately, in order to mitigate interference from segments without the presence of sound events. Experiments on three ESC datasets and two acoustic scene classification (ASC) datasets show that our method improves classification performance and also exhibits robustness to noise.
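To make the idea concrete, below is a minimal sketch of a parallel temporal-spectral attention module, assuming a CNN feature map of shape (batch, channels, frequency bands, time frames). The pooling, gating, and fusion choices (mean pooling, sigmoid gates, residual sum) are illustrative assumptions, not the authors' exact design.

# Minimal sketch of parallel temporal-spectral attention over a CNN feature map.
# Assumes input shape (B, C, F, T); design details here are illustrative only.
import torch
import torch.nn as nn


class ParallelTemporalSpectralAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions produce one attention score per time frame / frequency band.
        self.temporal_score = nn.Conv1d(channels, 1, kernel_size=1)
        self.spectral_score = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, F, T)
        # Temporal branch: pool over frequency, weight each time frame.
        t_feat = x.mean(dim=2)                                # (B, C, T)
        t_attn = torch.sigmoid(self.temporal_score(t_feat))   # (B, 1, T)
        t_out = x * t_attn.unsqueeze(2)                       # reweight time frames

        # Spectral branch: pool over time, weight each frequency band.
        f_feat = x.mean(dim=3)                                # (B, C, F)
        f_attn = torch.sigmoid(self.spectral_score(f_feat))   # (B, 1, F)
        f_out = x * f_attn.unsqueeze(3)                       # reweight frequency bands

        # Fuse the two enhanced feature maps with a residual path.
        return x + t_out + f_out


if __name__ == "__main__":
    feats = torch.randn(4, 64, 128, 431)   # e.g. 64 channels, 128 mel bands, 431 frames
    attn = ParallelTemporalSpectralAttention(channels=64)
    print(attn(feats).shape)               # torch.Size([4, 64, 128, 431])

Keeping the two branches in parallel, rather than stacking them, lets each attention map be computed from the unmodified feature map, which matches the paper's motivation of suppressing irrelevant segments along time and frequency independently.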