Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition
Index Terms: quaternion convolutional neural networks, deep learning, automatic speech recognition
Subjects: Machine Learning (cs.LG, stat.ML); Sound (cs.SD); Audio and Speech Processing (eess.AS); Artificial Intelligence (cs.AI)
DOI: 10.21437/interspeech.2018-1898
Publication Date: 2018-08-28
AUTHORS (7)
ABSTRACT
Accepted at INTERSPEECH 2018

Recently, the connectionist temporal classification (CTC) model, coupled with recurrent (RNN) or convolutional neural networks (CNN), has made it easier to train speech recognition systems in an end-to-end fashion. However, in real-valued models, time-frame components such as mel-filter-bank energies and the cepstral coefficients obtained from them, together with their first- and second-order derivatives, are processed as individual elements, while a natural alternative is to process such components as composed entities. We propose to group such elements in the form of quaternions and to process these quaternions with the established quaternion algebra. Quaternion numbers and quaternion neural networks have shown their efficiency at processing multidimensional inputs as entities, encoding internal dependencies, and solving many tasks with fewer learning parameters than real-valued models. This paper proposes to integrate multiple feature views in a quaternion-valued convolutional neural network (QCNN), to be used for sequence-to-sequence mapping with the CTC model. Promising results are reported using simple QCNNs in phoneme recognition experiments with the TIMIT corpus. More precisely, QCNNs obtain a lower phoneme error rate (PER) with fewer learning parameters than a competing model based on real-valued CNNs.
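The core idea in the abstract — grouping related feature views into a single quaternion and processing them with quaternion algebra — can be illustrated with the Hamilton product, the quaternion multiplication that quaternion layers use in place of real-valued multiply-accumulate. The sketch below is a minimal NumPy illustration, not the paper's implementation; the choice of which four feature views fill the quaternion components is an assumption for demonstration only.

```python
import numpy as np

def hamilton_product(q, w):
    """Hamilton product of two quaternions given as (r, x, y, z) arrays.

    This single product mixes all four input components into every
    output component, which is how a quaternion layer encodes the
    internal dependencies between grouped feature views.
    """
    r1, x1, y1, z1 = q
    r2, x2, y2, z2 = w
    return np.array([
        r1 * r2 - x1 * x2 - y1 * y2 - z1 * z2,  # real part
        r1 * x2 + x1 * r2 + y1 * z2 - z1 * y2,  # i part
        r1 * y2 - x1 * z2 + y1 * r2 + z1 * x2,  # j part
        r1 * z2 + x1 * y2 - y1 * x2 + z1 * r2,  # k part
    ])

# Hypothetical grouping of four feature views of one time frame into a
# quaternion (e.g. a static coefficient plus derived views); the exact
# layout here is illustrative, not taken from the paper.
feature_q = np.array([0.8, 0.1, -0.05, 0.3])
weight_q = np.array([0.5, 0.2, -0.1, 0.4])  # one learned quaternion weight
out = hamilton_product(feature_q, weight_q)
```

Note the parameter saving this structure implies: one quaternion weight (4 real numbers) connects two 4-component entities, whereas a dense real-valued connection between the same 4 inputs and 4 outputs would need 16 real weights.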