Santosh Kumar Cheekatmalla

ORCID: 0000-0003-1144-4507
Research Areas
  • Music and Audio Processing
  • Speech and Audio Processing
  • Speech Recognition and Synthesis
  • Music Technology and Sound Studies
  • Blind Source Separation Techniques
  • Theoretical and Experimental Particle Physics
  • Neural Networks and Reservoir Computing
  • Superconducting Materials and Applications
  • Neutrino Physics Research
  • Particle Detector Development and Performance
  • Muon and Positron Interactions and Applications

Amazon (United States)
2021-2022

Seattle University
2022

University of Kentucky
2007-2008

The mean life of the positive muon has been measured to a precision of 11 ppm using a low-energy, pulsed muon beam stopped in a ferromagnetic target, which was surrounded by a scintillator detector array. The result, τ(μ+) = 2.197 013(24) μs, is in excellent agreement with the previous world average. The new world average of 2.197 019(21) μs determines the Fermi constant G_F = 1.166 371(6) × 10⁻⁵ GeV⁻² (5 ppm). Additionally, the measurement of the positive-muon lifetime is needed to determine the nucleon pseudoscalar coupling g_P.

10.1103/physrevlett.99.032001 article EN Physical Review Letters 2007-07-16
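The Fermi-constant extraction above can be sketched numerically from the leading-order muon-decay relation 1/τ = G_F² m_μ⁵ / (192π³). This is a minimal sketch, not the paper's analysis: the electroweak radiative corrections are omitted, so the result lands about 0.2% below the quoted G_F. The constant names and function name are illustrative.

```python
import math

# Leading-order muon-decay relation (natural units, hbar = c = 1):
#   1 / tau_mu = G_F^2 * m_mu^5 / (192 * pi^3)
# Radiative corrections (~0.2% effect) are omitted, so the result is
# slightly below the quoted G_F = 1.166371e-5 GeV^-2.

HBAR_GEV_S = 6.582119569e-25   # hbar in GeV*s, converts lifetime to width
M_MU_GEV = 0.1056583745        # muon mass in GeV

def fermi_constant(tau_s: float) -> float:
    """Leading-order G_F in GeV^-2 from the muon lifetime in seconds."""
    width_gev = HBAR_GEV_S / tau_s   # decay width in GeV
    return math.sqrt(192 * math.pi**3 * width_gev / M_MU_GEV**5)

print(f"G_F ~ {fermi_constant(2.197019e-6):.5e} GeV^-2")
```

Plugging in the world-average lifetime gives roughly 1.164 × 10⁻⁵ GeV⁻²; the remaining gap to 1.166 371 × 10⁻⁵ is closed by the omitted corrections.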

Fixed-point (FXP) inference has proven suitable for embedded devices with limited computational resources, yet model training is still performed in floating-point (FLP). FXP training has not been fully explored, and the non-trivial conversion from FLP to FXP presents an unavoidable performance drop. We propose a novel method to train and obtain FXP convolutional keyword-spotting (KWS) models. We combine our methodology with two quantization-aware-training (QAT) techniques – squashed weight distribution and absolute cosine...

10.1109/icassp49357.2023.10095977 article EN ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023-05-05
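The FLP-to-FXP gap described above is commonly closed with fake quantization during training: the forward pass sees fixed-point-rounded weights while the backward pass treats rounding as identity (straight-through estimator). A minimal sketch of a generic symmetric per-tensor scheme, not the paper's exact squashed-weight method:

```python
import numpy as np

def fake_quantize(w: np.ndarray, bits: int = 8) -> np.ndarray:
    """Symmetric per-tensor fake quantization to a `bits`-bit grid."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax
    if scale == 0:                             # all-zero tensor: nothing to do
        return w
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                           # dequantize back to float

# During QAT the forward pass would use fake_quantize(w); the backward
# pass treats the rounding as identity, so gradients keep updating the
# underlying float weights.
w = np.array([-1.0, -0.3, 0.02, 0.7, 1.0])
wq = fake_quantize(w, bits=4)                  # values snap to a 4-bit grid
```

After training, the already-quantized weights convert to true fixed-point without the performance drop of a post-hoc FLP-to-FXP conversion.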

In this work, we propose Tiny-CRNN (Tiny Convolutional Recurrent Neural Network) models applied to the problem of wakeword detection, and augment them with scaled dot product attention. We find that, compared to Convolutional Neural Network models, False Accepts in a 250k parameter budget can be reduced by 25% with a 10% reduction in parameter size using models based on the Tiny-CRNN architecture, and we can get up to a 32% reduction at a 50k parameter budget with a 75% reduction in parameter size compared to word-level Dense models. We discuss solutions to the challenging problem of performing inference on streaming audio, as well as differences in start-end index errors...

10.1109/asru51503.2021.9688299 article EN 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) 2021-12-13
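The scaled dot product attention used to augment these models is the standard softmax(QK^T/√d_k)V. A NumPy sketch with illustrative shapes (the frame counts and feature dimension below are assumptions, not values from the paper):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # (T_q, T_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # softmax stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # weighted sum of values

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 16))    # 5 query frames, 16-dim features (assumed)
k = rng.normal(size=(20, 16))   # 20 encoder frames
v = rng.normal(size=(20, 16))
out = scaled_dot_product_attention(q, k, v)       # shape (5, 16)
```

Each output frame is a convex combination of the value frames, which is what lets the attention layer summarize a variable-length recurrent encoding into a fixed-size vector for classification.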

10.1016/j.nima.2008.03.121 article EN Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 2008-04-12

In this work, we propose small-footprint Convolutional Recurrent Neural Network models applied to the problem of wakeword detection and augment them with scaled dot product attention. We find that, compared to Convolutional Neural Network models, false accepts in a 250k parameter budget can be reduced by 25% with a 10% reduction in parameter size using CRNNs, and we can get up to a 32% improvement at a 50k parameter budget with a 75% reduction in parameter size compared to word-level Dense models. We discuss solutions to the challenging problem of performing inference on streaming audio, as well as differences in start-end index errors and latency in comparison with CNN,...

10.48550/arxiv.2011.12941 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Fixed-point (FXP) inference has proven suitable for embedded devices with limited computational resources, yet model training is still performed in floating-point (FLP). FXP training has not been fully explored, and the non-trivial conversion from FLP to FXP presents an unavoidable performance drop. We propose a novel method to train and obtain FXP convolutional keyword-spotting (KWS) models. We combine our methodology with two quantization-aware-training (QAT) techniques – squashed weight distribution and absolute cosine...

10.48550/arxiv.2303.02284 preprint EN other-oa arXiv (Cornell University) 2023-01-01

We propose a novel 2-stage sub-8-bit quantization-aware training algorithm for all components of a 250K-parameter feedforward, streaming, state-free keyword-spotting model. For the 1st stage, we adapt a recently proposed quantization technique that uses a non-linear transformation with tanh(.) on dense-layer weights. In the 2nd stage, we use linear quantization methods on the rest of the network, including other parameters (bias, gain, batchnorm), inputs, and activations. We conduct large-scale experiments, training on 26,000 hours of de-identified production...

10.48550/arxiv.2207.06920 preprint EN cc-by arXiv (Cornell University) 2022-01-01
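The two quantizer stages above can be sketched as follows, under illustrative choices rather than the paper's exact scheme: stage 1 squashes dense-layer weights through tanh(.) so the quantization grid is denser near zero, where most weights concentrate; stage 2 applies plain symmetric linear quantization to everything else. Function names and bit-widths here are assumptions.

```python
import numpy as np

def tanh_quantize(w, bits=4):
    """Stage 1: non-linear (tanh) quantization of dense-layer weights."""
    levels = 2 ** (bits - 1) - 1
    squashed = np.tanh(w)                        # map weights into (-1, 1)
    q = np.round(squashed * levels) / levels     # uniform grid in tanh space
    # Clip before arctanh so the endpoints +-1 stay finite.
    return np.arctanh(np.clip(q, -0.9999, 0.9999))

def linear_quantize(x, bits=8):
    """Stage 2: symmetric linear quantization for remaining parameters."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    if scale == 0:
        return x
    return np.round(x / scale) * scale
```

The tanh transform gives sub-8-bit weight grids finer resolution for small weights at the cost of coarser outliers, while biases, gains, batchnorm parameters, and activations tolerate the simpler linear grid.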

In this work, we propose Tiny-CRNN (Tiny Convolutional Recurrent Neural Network) models applied to the problem of wakeword detection, and augment them with scaled dot product attention. We find that, compared to Convolutional Neural Network models, False Accepts in a 250k parameter budget can be reduced by 25% with a 10% reduction in parameter size using models based on the Tiny-CRNN architecture, and we can get up to a 32% reduction at a 50k parameter budget with a 75% reduction in parameter size compared to word-level Dense models. We discuss solutions to the challenging problem of performing inference on streaming audio, as well as differences in start-end index errors...

10.48550/arxiv.2109.14725 preprint EN other-oa arXiv (Cornell University) 2021-01-01