- Music and Audio Processing
- Speech and Audio Processing
- Speech Recognition and Synthesis
- Music Technology and Sound Studies
- Blind Source Separation Techniques
- Particle physics theoretical and experimental studies
- Neural Networks and Reservoir Computing
- Superconducting Materials and Applications
- Neutrino Physics Research
- Particle Detector Development and Performance
- Muon and positron interactions and applications
Amazon (United States), 2021-2022
Seattle University, 2022
University of Kentucky, 2007-2008
The mean life of the positive muon has been measured to a precision of 11 ppm using a low-energy, pulsed muon beam stopped in a ferromagnetic target, which was surrounded by a scintillator detector array. The result, τ(μ+) = 2.197 013(24) μs, is in excellent agreement with the previous world average. The new world average, τ(μ+) = 2.197 019(21) μs, determines the Fermi constant G_F = 1.166 371(6) × 10^-5 GeV^-2 (5 ppm). Additionally, the measurement of the positive-muon lifetime is needed to determine the nucleon pseudoscalar coupling g_P.
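For reference, the Fermi constant quoted in this abstract is extracted from the measured lifetime through the standard electroweak relation sketched below; this is the textbook form, with Δq standing in for the phase-space and radiative corrections that the abstract does not spell out.

```latex
% Standard relation between the muon lifetime and the Fermi constant;
% \Delta q collects phase-space and radiative corrections.
\frac{1}{\tau_{\mu}} \;=\; \frac{G_F^{2}\, m_{\mu}^{5}}{192\,\pi^{3}}\,\bigl(1 + \Delta q\bigr),
\qquad
G_F = 1.166\,371(6) \times 10^{-5}\ \mathrm{GeV}^{-2}.
```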
Fixed-point (FXP) inference has proven suitable for embedded devices with limited computational resources, and yet model training is continually performed in floating-point (FLP). FXP training has not been fully explored, and the non-trivial conversion from FLP to FXP presents an unavoidable performance drop. We propose a novel method to train and obtain FXP convolutional keyword-spotting (KWS) models. We combine our methodology with two quantization-aware-training (QAT) techniques – squashed weight distribution and absolute cosine...
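As a rough illustration of what fixed-point-aware training involves, the sketch below fake-quantizes convolutional weights during the forward pass and passes gradients through with a straight-through estimator. It is a generic QAT pattern, not the paper's squashed-weight-distribution or absolute-cosine method; the bit width, layer shapes, and scale handling are assumptions.

```python
# Minimal sketch of quantization-aware training (QAT) with fake quantization
# and a straight-through estimator (STE). Generic illustration only; the bit
# width, scale handling, and layer shapes are assumptions, not the paper's
# exact recipe.
import torch
import torch.nn as nn


class FakeQuantize(torch.autograd.Function):
    """Quantize to a symmetric fixed-point grid in the forward pass,
    pass gradients through unchanged in the backward pass (STE)."""

    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None


class QATConv1d(nn.Module):
    """Conv layer whose weights are fake-quantized during training."""

    def __init__(self, in_ch, out_ch, kernel_size, num_bits=8):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = FakeQuantize.apply(self.conv.weight, self.num_bits)
        return nn.functional.conv1d(x, w_q, self.conv.bias)


if __name__ == "__main__":
    layer = QATConv1d(in_ch=40, out_ch=64, kernel_size=5)
    feats = torch.randn(8, 40, 100)   # batch of log-mel feature frames
    out = layer(feats)
    out.sum().backward()              # gradients reach the FLP weights via the STE
    print(out.shape)                  # torch.Size([8, 64, 96])
```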
In this work, we propose Tiny-CRNN (Tiny Convolutional Recurrent Neural Network) models applied to the problem of wakeword detection, and augment them with scaled dot product attention. We find that, compared to Convolutional Neural Network models, False Accepts in a 250k parameter budget can be reduced by 25% with a 10% reduction in size using models based on the Tiny-CRNN architecture, and we get up to a 32% reduction at 50k parameters with a 75% reduction in size compared to word-level Dense models. We discuss solutions to the challenges of performing inference on streaming audio as well as differences in start-end index errors...
In this work, we propose small-footprint Convolutional Recurrent Neural Network models applied to the problem of wakeword detection and augment them with scaled dot product attention. We find that, compared to Convolutional Neural Network models, false accepts in a 250k parameter budget can be reduced by 25% with a 10% reduction in size using CRNNs, and we get up to a 32% improvement at 50k parameters with a 75% reduction in size compared to word-level Dense models. We discuss solutions to the challenges of performing inference on streaming audio as well as differences in start-end index errors and latency in comparison to CNN,...
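The two abstracts above describe the same small-footprint CRNN-plus-attention idea; the sketch below shows the general pattern (a convolutional front end, a recurrent layer, and scaled dot-product attention pooling over time). All layer sizes and the single learned query are illustrative assumptions, not the Tiny-CRNN configuration reported in the papers.

```python
# Minimal sketch of a small CRNN-style wakeword model with scaled dot-product
# attention pooling over the recurrent outputs. Layer sizes and the query
# construction are illustrative assumptions.
import math
import torch
import torch.nn as nn


class TinyCRNNSketch(nn.Module):
    def __init__(self, n_mels=40, conv_ch=32, rnn_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, conv_ch, kernel_size=(3, 3), stride=(2, 2)),
            nn.ReLU(),
        )
        # (n_mels - 1) // 2 matches the frequency size after the 3x3/stride-2 conv
        self.gru = nn.GRU(conv_ch * ((n_mels - 1) // 2), rnn_dim,
                          batch_first=True)
        self.query = nn.Parameter(torch.randn(rnn_dim))   # learned query
        self.out = nn.Linear(rnn_dim, 1)                   # wakeword score

    def forward(self, x):                    # x: (batch, time, n_mels)
        h = self.conv(x.unsqueeze(1))        # (batch, ch, time', mel')
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)
        h, _ = self.gru(h)                   # (batch, time', rnn_dim)
        # scaled dot-product attention with a single learned query vector
        scores = h @ self.query / math.sqrt(h.size(-1))    # (batch, time')
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        context = (weights * h).sum(dim=1)                  # (batch, rnn_dim)
        return torch.sigmoid(self.out(context)).squeeze(-1)


if __name__ == "__main__":
    model = TinyCRNNSketch()
    feats = torch.randn(4, 80, 40)   # 4 clips, 80 frames, 40 mel bins
    print(model(feats).shape)        # torch.Size([4])
```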
We propose a novel 2-stage sub-8-bit quantization aware training algorithm for all components of a 250K parameter feedforward, streaming, state-free keyword spotting model. For the 1st stage, we adapt a recently proposed quantization technique using a non-linear transformation with tanh(.) on dense layer weights. In the 2nd stage, we use linear quantization methods on the rest of the network, including other parameters (bias, gain, batchnorm), inputs, and activations. We conduct large-scale experiments, training on 26,000 hours of de-identified production,...
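A minimal sketch of the two quantization styles mentioned in this abstract: a tanh(.)-based non-linear transformation for dense-layer weights, and plain linear quantization for the remaining tensors. The normalization, bit widths, and the surrounding training procedure are assumptions for illustration, not the paper's 2-stage algorithm as specified.

```python
# Illustrative sketch only: tanh-squashed weight quantization (stage 1 idea)
# and symmetric uniform quantization for other tensors (stage 2 idea).
# Normalization constants and bit widths are assumptions.
import numpy as np


def tanh_quantize(weights: np.ndarray, num_bits: int = 4) -> np.ndarray:
    """Squash weights with tanh, normalize to [-1, 1], then quantize uniformly."""
    squashed = np.tanh(weights)
    squashed = squashed / (np.abs(squashed).max() + 1e-8)
    levels = 2 ** (num_bits - 1) - 1
    return np.round(squashed * levels) / levels


def linear_quantize(x: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Plain symmetric uniform quantization for biases, gains, activations."""
    levels = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / levels + 1e-8
    return np.round(x / scale) * scale


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dense_w = rng.normal(scale=0.5, size=(128, 64))
    act = rng.normal(size=(1, 128))
    print(np.unique(tanh_quantize(dense_w)).size)    # small number of discrete levels
    print(np.abs(act - linear_quantize(act)).max())  # small quantization error
```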