- Neural Dynamics and Brain Function
- CCD and CMOS Imaging Sensors
- Neuroscience and Neural Engineering
- Face and Expression Recognition
- Data Mining Algorithms and Applications
- Advanced Chemical Sensor Technologies
- Evolutionary Algorithms and Applications
- Reinforcement Learning in Robotics
- Statistical Mechanics and Entropy
- Data Management and Algorithms
- Artificial Intelligence in Games
- Olfactory and Sensory Function Studies
- Algorithms and Data Compression
- Gaussian Processes and Bayesian Inference
Harvard University
2022-2023
Center for Pain and the Brain
2023
Columbia University
2020-2021
Understanding how feature learning affects generalization is among the foremost goals of modern deep learning theory. Here, we study how the ability to learn representations affects the generalization performance of a simple class of models: Bayesian linear neural networks trained on unstructured Gaussian data. By comparing random feature models to models in which all layers are trained, we provide a detailed characterization of the interplay between width, depth, data density, and prior mismatch. We show that both models display samplewise double-descent behavior in the presence...
Decoding sensory stimuli from neural activity can provide insight into how the nervous system might interpret the physical environment, and facilitates the development of brain-machine interfaces. Nevertheless, the decoding problem remains a significant open challenge. Here, we present an efficient nonlinear decoding approach for inferring natural scene stimuli from the spiking activities of retinal ganglion cells (RGCs). Our approach uses neural networks to improve on existing decoders in both accuracy and scalability. Trained and validated on real spike...
Within a single sniff, the mammalian olfactory system can decode the identity and concentration of odorants wafted on turbulent plumes of air. Yet, it must do so given access only to the noisy, dimensionally-reduced representation of the odor world provided by receptor neurons. As a result, it must solve a compressed sensing problem, relying on the fact that only a handful of the millions of possible odorants are present in a given scene. Inspired by this principle, past works have proposed normative models for decoding. However, these have not captured the unique...
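The compressed sensing problem described above — recovering a sparse odor scene from a low-dimensional receptor readout — can be sketched with a standard LASSO solve via iterative soft-thresholding (ISTA). The dimensions, affinity matrix, and solver choice here are illustrative assumptions, not the paper's normative model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_odors, n_receptors, k = 200, 40, 3        # hypothetical problem sizes
A = rng.normal(size=(n_receptors, n_odors)) / np.sqrt(n_receptors)  # affinities
x_true = np.zeros(n_odors)                  # sparse scene: only k odorants present
x_true[rng.choice(n_odors, size=k, replace=False)] = rng.uniform(1, 2, size=k)
y = A @ x_true                              # dimensionally-reduced receptor response

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L       # gradient step on the quadratic term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)                          # recovers the sparse odor scene
```

With only 40 measurements of a 200-dimensional scene, recovery succeeds because the scene is 3-sparse — the same structural assumption the abstract invokes.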
In-context learning (ICL), the remarkable ability to solve a task from only input exemplars, has commonly been assumed to be a unique hallmark of Transformer models. In this study, we demonstrate that multi-layer perceptrons (MLPs) can also learn in-context. Moreover, we find that MLPs, and the closely related MLP-Mixer models, learn in-context competitively with Transformers given the same compute budget. We further show that MLPs outperform Transformers on a subset of ICL tasks designed to test relational reasoning. These results suggest that in-context learning is...
Dogs and laboratory mice are commonly trained to perform complex tasks by guiding them through a curriculum of simpler tasks ('shaping'). What are the principles behind effective shaping strategies? Here, we propose a machine learning framework for shaping animal behavior, where an autonomous teacher agent decides its student's task based on the transcript of successes and failures on previously assigned tasks. Using teachers that plan within a common sequence task, we show that near-optimal shaping algorithms adaptively alternate between harder...
Given a budget on total model size, one must decide whether to train a single, large neural network or to combine the predictions of many smaller networks. We study this trade-off for ensembles of random-feature ridge regression models. We prove that when a fixed number of trainable parameters are partitioned among $K$ independently trained models, $K=1$ achieves optimal performance, provided the ridge parameter is optimally tuned. We then derive scaling laws which describe how the test risk of an ensemble of models decays with...
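The trade-off in the abstract above — one large random-feature ridge model versus $K$ smaller ones sharing a fixed parameter budget — can be sketched numerically. The dimensions, ReLU features, and the fixed (rather than optimally tuned) ridge parameter here are illustrative assumptions, so the ordering across $K$ need not match the paper's $K=1$ optimality result:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, P_total = 20, 300, 512                # input dim, samples, feature budget
w_star = rng.normal(size=d) / np.sqrt(d)    # linear teacher (illustrative)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)
X_test = rng.normal(size=(1000, d))
y_test = X_test @ w_star

def ensemble_risk(K, lam=1e-2):
    """Test risk of K averaged random-feature ridge models sharing one budget."""
    P = P_total // K                        # features per ensemble member
    preds = np.zeros(len(y_test))
    for _ in range(K):
        W = rng.normal(size=(d, P)) / np.sqrt(d)            # frozen random layer
        Phi, Phi_t = np.maximum(X @ W, 0), np.maximum(X_test @ W, 0)
        a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(P), Phi.T @ y)  # ridge fit
        preds += Phi_t @ a                  # accumulate member predictions
    return np.mean((preds / K - y_test) ** 2)

# Same total parameter count P_total, split among K independent members.
risk = {K: ensemble_risk(K) for K in (1, 4, 16)}
```

Every ensemble comfortably beats the trivial zero predictor here; sweeping `lam` per member, as the theorem requires, is what makes the single large model ($K=1$) optimal.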