Metamaterials you can talk to: Speech recognition with elastic neural networks
DOI:
10.1121/10.0010873
Publication Date:
2022-05-09T18:16:16Z
AUTHORS (6)
ABSTRACT
To detect spoken commands, smart devices (for example, a speaker with Alexa or Siri) continuously convert acoustic waves to electronic signals, translate them into the digital domain, and analyze them in a signal processor. Each of these steps constantly consumes energy, imposing the need for tethered operation or large batteries. We propose to solve this problem using elastic neural networks: metamaterials consisting of arrays of coupled (potentially nonlinear) resonators. The frequencies and couplings of the resonators are optimized to maximise speech classification accuracy (energy is transmitted when the structure is excited by one word but not by another). Even for purely linear metastructures, we observe binary classification accuracies exceeding 90% for a number of pairs of words. This is demonstrated on a dataset recorded from a diverse group of speakers. To attain these results, we have developed refined modelling techniques involving localised oscillations and machine learning. A unique feature of metamaterial-based processing is that it is entirely passive, requiring no external energy. This is possible due to the very low energy dissipation of elastic waves.
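The abstract describes tuning the frequencies and couplings of an array of coupled resonators so that the transmitted energy separates two spoken words. As a rough illustration of that idea only (not the authors' implementation), the following JAX sketch models a purely linear chain of resonators in the frequency domain and adjusts hypothetical onsite-stiffness and coupling parameters by gradient descent; the word "spectra" here are random stand-ins for real recordings, and all parameter values are assumptions.

# Minimal sketch, assuming a linear mass-spring chain with unit masses:
# transmitted energy at the last resonator acts as a binary word classifier,
# and stiffnesses/couplings are trained by gradient descent (JAX).
import jax
import jax.numpy as jnp

N_RES = 16                            # number of resonators in the chain
OMEGA = jnp.linspace(0.1, 3.0, 128)   # analysis frequencies (arbitrary units)
GAMMA = 0.01                          # small damping, reflecting the low-loss claim

def transmission(params, omega):
    """|displacement of the last resonator|^2 for unit forcing on the first."""
    k, c = params["k"], params["c"]   # onsite stiffnesses, nearest-neighbour couplings
    # Tridiagonal dynamical matrix of the chain.
    onsite = k + jnp.pad(c, (0, 1)) + jnp.pad(c, (1, 0))
    K = jnp.diag(onsite) - jnp.diag(c, 1) - jnp.diag(c, -1)
    A = K - (omega**2) * jnp.eye(N_RES) + 1j * GAMMA * omega * jnp.eye(N_RES)
    f = jnp.zeros(N_RES).at[0].set(1.0)      # drive the input resonator
    x = jnp.linalg.solve(A, f.astype(A.dtype))
    return jnp.abs(x[-1]) ** 2

def transmitted_energy(params, spectrum):
    """Energy reaching the output when driven by a word's power spectrum."""
    H = jax.vmap(lambda w: transmission(params, w))(OMEGA)
    return jnp.sum(H * spectrum)

def loss(params, spectra_a, spectra_b):
    """Push transmitted energy up for word A and down for word B (logistic margin)."""
    e_a = jax.vmap(lambda s: transmitted_energy(params, s))(spectra_a)
    e_b = jax.vmap(lambda s: transmitted_energy(params, s))(spectra_b)
    margin = jnp.log(e_a[:, None]) - jnp.log(e_b[None, :])
    return jnp.mean(jax.nn.softplus(-margin))

@jax.jit
def train_step(params, spectra_a, spectra_b, lr=0.05):
    grads = jax.grad(loss)(params, spectra_a, spectra_b)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

if __name__ == "__main__":
    key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
    params = {"k": jnp.ones(N_RES), "c": 0.3 * jnp.ones(N_RES - 1)}
    # Stand-in spectra for two "words"; real data would come from recordings.
    spectra_a = jax.nn.softplus(jax.random.normal(key_a, (8, OMEGA.size)))
    spectra_b = jax.nn.softplus(jax.random.normal(key_b, (8, OMEGA.size)))
    for _ in range(200):
        params = train_step(params, spectra_a, spectra_b)
    print("final loss:", loss(params, spectra_a, spectra_b))

A thresholded comparison of the two transmitted energies then gives the binary decision; the passive device itself performs the filtering, and only the readout consumes power.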