- Neural dynamics and brain function
- Advanced Memory and Neural Computing
- Neural Networks and Applications
- Neural Networks and Reservoir Computing
- Functional Brain Connectivity Studies
- Ferroelectric and Negative Capacitance Devices
- Physics and Engineering Research Articles
- Neuroscience and Neuropharmacology Research
- Visual perception and processing mechanisms
- Cell Image Analysis Techniques
- History and Theory of Mathematics
- Corporate Governance and Management
- Photoreceptor and optogenetics research
- Mathematics and Applications
- Memory and Neural Mechanisms
- Algebraic and Geometric Analysis
- Complex Systems and Time Series Analysis
- Blind Source Separation Techniques
- Sparse and Compressive Sensing Techniques
- Cellular Automata and Applications
- Generative Adversarial Networks and Image Synthesis
- CCD and CMOS Imaging Sensors
- EEG and Brain-Computer Interfaces
- Neuroscience and Neural Engineering
- stochastic dynamics and bifurcation
University of California, Berkeley
2016-2025
Center for Theoretical Biological Physics
2008-2025
Intel (United States)
2020-2024
Los Angeles Mission College
2021-2023
Neuroscience Institute
2023
University of California, San Francisco
2020
Berkeley College
2018
University of Münster
1955-2012
University of California System
2007
Universität Ulm
1996-2003
Chaos, or exponential sensitivity to small perturbations, appears everywhere in nature. Moreover, chaos is predicted to play diverse functional roles in living systems. A method for detecting chaos from empirical measurements should therefore be a key component of the biologist's toolkit. But classic chaos-detection tools are highly sensitive to measurement noise and break down in common edge cases, making it difficult to detect chaos in domains, like biology, where data are noisy. However, newer tools promise to overcome these...
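To make the kind of tool this abstract alludes to concrete, here is a minimal Python sketch of the classic 0-1 test for chaos (Gottwald and Melbourne), one of the standard chaos-detection methods; it is not the specific noise-robust method developed in this work, and the logistic-map example and parameter choices are illustrative only.

```python
import numpy as np

def zero_one_test(x, n_c=20, rng=None):
    """Basic 0-1 test for chaos (Gottwald & Melbourne), without noise correction.

    Returns K in [0, 1]: values near 1 suggest chaos, near 0 regular dynamics.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    N = len(x)
    ns = np.arange(1, N // 10 + 1)            # displacement lags n << N
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, size=n_c):
        j = np.arange(1, N + 1)
        p = np.cumsum(x * np.cos(j * c))      # translation variables
        q = np.cumsum(x * np.sin(j * c))
        M = np.array([np.mean((p[n:] - p[:-n])**2 + (q[n:] - q[:-n])**2)
                      for n in ns])
        # modified mean-square displacement: subtract the oscillatory term
        D = M - np.mean(x)**2 * (1 - np.cos(ns * c)) / (1 - np.cos(c))
        Ks.append(np.corrcoef(ns, D)[0, 1])   # growth-rate correlation K_c
    return float(np.median(Ks))

# Example: chaotic logistic map (r = 3.9) vs. a periodic regime (r = 3.5)
def logistic(r, n=5000, x0=0.4):
    x = np.empty(n); x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1 - x[i])
    return x

print(zero_one_test(logistic(3.9)))  # close to 1 (chaotic)
print(zero_one_test(logistic(3.5)))  # close to 0 (periodic)
```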
The biologically inspired spiking neurons used in neuromorphic computing are nonlinear filters with dynamic state variables, very different from the stateless neuron models used in deep learning. The next version of Intel's neuromorphic research processor, Loihi 2, supports a wide range of stateful spiking neurons with fully programmable dynamics. Here we showcase advanced spiking neuron models that can be used to efficiently process streaming data in simulation experiments on emulated Loihi 2 hardware. In one example, Resonate-and-Fire (RF) neurons are used to compute the Short Time Fourier...
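As a rough illustration of how a bank of resonators relates to short-time Fourier analysis, the sketch below drives resonate-and-fire style complex resonators with a streaming signal; the decay constant and frequencies are arbitrary assumptions, spike generation is omitted, and this is not the Loihi 2 implementation.

```python
import numpy as np

def rf_bank_response(signal, freqs, dt=1e-3, decay=20.0):
    """Drive a bank of complex resonators (RF-neuron style) with a signal.

    Each unit has state z obeying z <- exp((-decay + i*2*pi*f)*dt) * z + x[t],
    so |z| tracks a leaky short-time Fourier magnitude at frequency f.
    """
    omegas = 2j * np.pi * np.asarray(freqs)
    kernel = np.exp((-decay + omegas) * dt)   # per-step decay and rotation
    z = np.zeros(len(freqs), dtype=complex)
    mags = np.empty((len(signal), len(freqs)))
    for t, x in enumerate(signal):
        z = kernel * z + x                    # resonator update
        mags[t] = np.abs(z)                   # spectral magnitude estimate
    return mags

# Example: a 50 Hz tone should excite the resonator tuned near 50 Hz.
dt = 1e-3
t = np.arange(0, 1.0, dt)
sig = np.sin(2 * np.pi * 50 * t)
resp = rf_bank_response(sig, freqs=[10, 50, 90], dt=dt)
print(resp[-1])   # largest magnitude in the middle (50 Hz) channel
```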
Significance What changes in the brain when we lose consciousness? One possibility is that the loss of consciousness corresponds to a transition of the brain's electric activity away from edge-of-chaos criticality, or the knife's edge between stability and chaos. Recent mathematical developments have produced tools for testing this hypothesis, which we apply to cortical recordings from diverse brain states. We show that the cortex is indeed poised near the boundary of chaos during conscious states and that the transition to unconsciousness disrupts...
Although neuronal spikes can be readily detected from extracellular recordings, synaptic and subthreshold activity remains undifferentiated within the local field potential (LFP). In the hippocampus, neurons discharge selectively when the rat is at certain locations, while LFPs at single anatomical sites exhibit no such place-tuning. Nonetheless, because the representation of position is sparse and distributed, we hypothesized that spatial information can be recovered from multiple-site LFP recordings. Using high-density...
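As a hedged illustration of the general idea of pooling many weakly informative channels, the sketch below decodes a position signal from synthetic multichannel features with ridge regression; the data, tuning model, and decoder are stand-ins, not the recordings or estimator used in this study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in data: n_t time bins, n_ch channels. Each channel's feature
# carries only weak, noisy information about position, so no single channel is
# clearly tuned, but the ensemble is informative.
n_t, n_ch = 5000, 64
position = np.cumsum(rng.normal(0, 0.02, n_t))          # random-walk trajectory
tuning = rng.normal(0, 1, n_ch)
features = np.outer(position, tuning) + rng.normal(0, 5.0, (n_t, n_ch))

# Ridge regression decoder: train on the first half, test on the second half.
lam = 1.0
X_tr, X_te = features[: n_t // 2], features[n_t // 2 :]
y_tr, y_te = position[: n_t // 2], position[n_t // 2 :]
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_ch), X_tr.T @ y_tr)
pred = X_te @ w
r = np.corrcoef(pred, y_te)[0, 1]
print(f"decoded-vs-true position correlation on held-out data: {r:.2f}")
```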
To accommodate structured approaches of neural computation, we propose a class of recurrent networks for indexing and storing sequences of symbols or analog data vectors. These networks, with randomized input weights and orthogonal recurrent weights, implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, storage in such reservoir networks is lossy, and crosstalk noise limits retrieval accuracy and information capacity. A novel theory to optimize memory performance...
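A minimal sketch of the indexing scheme such networks implement, using a random permutation as the orthogonal recurrent transform and random bipolar symbol vectors; the dimension and sequence length are illustrative, and the optimization theory the abstract refers to is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_symbols = 2048, 26
codebook = rng.choice([-1.0, 1.0], size=(n_symbols, D))  # random bipolar symbol vectors

perm = rng.permutation(D)        # random permutation as orthogonal recurrent weights
inv_perm = np.argsort(perm)

def store(sequence):
    """Index a sequence into one vector: x <- P x + codebook[s] at each step."""
    x = np.zeros(D)
    for s in sequence:
        x = x[perm] + codebook[s]
    return x

def recall(x, steps_back):
    """Recover the item stored `steps_back` steps before the end (0 = last item)."""
    for _ in range(steps_back):
        x = x[inv_perm]                        # undo the permutations applied since storage
    return int(np.argmax(codebook @ x))        # cleanup: nearest codebook vector

seq = [3, 17, 8, 0, 25]
memory = store(seq)
print([recall(memory, k) for k in range(len(seq))])
# items from last to first: [25, 0, 8, 17, 3]; retrieval is lossy (crosstalk noise)
# and degrades as the stored sequence grows relative to D.
```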
This article reviews recent progress in the development of a computing framework...
A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., implementations of networks of interacting binary spin variables. Most Ising machines leverage second-order interactions, although important classes of problems, such as satisfiability, map more seamlessly onto networks with higher-order interactions. Here, we demonstrate that higher-order Ising machines can solve such problems more resource-efficiently in terms of the number of spin variables and their connections when compared with traditional second-order machines. Further, our...
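To illustrate how a satisfiability problem maps onto higher-order spin interactions, here is a toy Python sketch: each 3-SAT clause contributes a third-order energy term, the total energy counts unsatisfied clauses, and simple simulated annealing minimizes it. The instance and annealing schedule are made up, and this is not the hardware implementation described in the work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy 3-SAT instance: each clause is a list of literals
# (variable index, sign), with sign = +1 for x_i and -1 for NOT x_i.
clauses = [[(0, +1), (1, -1), (2, +1)],
           [(0, -1), (2, -1), (3, +1)],
           [(1, +1), (3, -1), (4, +1)],
           [(2, +1), (4, -1), (0, -1)]]
n = 5

def energy(s):
    """Higher-order Ising energy = number of unsatisfied clauses.

    Each clause contributes prod_i (1 - a_i * s_i) / 2, a third-order
    interaction among its three spins.
    """
    E = 0.0
    for clause in clauses:
        E += np.prod([(1 - a * s[v]) / 2 for v, a in clause])
    return E

# Minimal simulated annealing over the higher-order energy.
s = rng.choice([-1, 1], size=n)
T = 2.0
for step in range(2000):
    i = rng.integers(n)
    trial = s.copy()
    trial[i] *= -1
    dE = energy(trial) - energy(s)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s = trial
    T *= 0.999                       # geometric cooling schedule
print("unsatisfied clauses:", energy(s), "assignment:", (s > 0).astype(int))
```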
Discovering the structure underlying observed data is a recurring problem in machine learning with important applications in neuroscience. It is also a primary function of the brain. When data can be actively collected in the context of a closed action-perception loop, behavior becomes a critical determinant of learning efficiency. Psychologists studying exploration and curiosity in humans and animals have long argued that learning itself is a motivator of behavior. However, the theoretical basis of learning-driven behavior is not well understood. Previous computational...
Significance This work makes 2 contributions. First, we present a neural network model of associative memory that stores and retrieves sparse patterns of complex variables. The network can store analog information as fixed-point attractors in the complex domain; it is governed by an energy function and has increased capacity compared to early models. Second, we translate the attractor networks into spiking networks, where the timing of a spike indicates the phase of a complex number. We show that the fixed points correspond to stable periodic spike patterns. It...
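The sketch below shows the simplest dense form of a complex-valued ("phasor") associative memory, with Hebbian outer-product weights and a phase-only update; it omits the sparsity, thresholding, and spiking translation that are the contributions of this work, and the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 256, 5

# Store K dense phasor patterns (unit-magnitude complex entries, random phases).
patterns = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(K, N)))

# Hebbian outer-product (Hermitian) weights with zero diagonal.
W = patterns.T @ patterns.conj() / N
np.fill_diagonal(W, 0)

def retrieve(x, steps=20):
    """Iterate the phasor update: each unit keeps only the phase of its input field."""
    for _ in range(steps):
        field = W @ x
        x = np.exp(1j * np.angle(field))
    return x

# Cue: stored pattern 0 corrupted by phase noise on every unit.
cue = patterns[0] * np.exp(1j * rng.normal(0, 0.8, size=N))
out = retrieve(cue)
overlap = np.abs(np.vdot(patterns[0], out)) / N
print(f"overlap with stored pattern after retrieval: {overlap:.3f}")  # close to 1
```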
An outstanding problem in neuroscience is to understand how information is integrated across the many modules of the brain. While classic information-theoretic measures have transformed our understanding of feedforward processing in the brain's sensory periphery, comparable measures for information flow in the massively recurrent networks of the rest of the brain have been lacking. To address this, recent work in information theory has produced a sound measure of network-wide "integrated information", which can be estimated from time-series data. But a computational hurdle...
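As a rough illustration of estimating an integration-like quantity from time-series data, the sketch below computes a simple "whole-minus-sum" time-lagged mutual information difference across one fixed bipartition of a simulated Gaussian VAR network; this earlier proxy is not the measure discussed in the work, and it sidesteps the partition-search hurdle the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(6)

def gaussian_mi(x, y):
    """Mutual information (nats) between jointly Gaussian vectors, from samples."""
    cx = np.cov(x, rowvar=False)
    cy = np.cov(y, rowvar=False)
    cxy = np.cov(np.hstack([x, y]), rowvar=False)
    logdet = lambda c: np.linalg.slogdet(np.atleast_2d(c))[1]
    return 0.5 * (logdet(cx) + logdet(cy) - logdet(cxy))

# Simulate a small coupled VAR(1) network: x_t = A x_{t-1} + noise.
n, T = 4, 20000
A = 0.4 * np.eye(n) + 0.1 * rng.normal(size=(n, n))
X = np.zeros((T, n))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.normal(size=n)

tau = 1
past, future = X[:-tau], X[tau:]

# Whole-minus-sum across one even bipartition {0,1} | {2,3}.
whole = gaussian_mi(past, future)
parts = (gaussian_mi(past[:, :2], future[:, :2])
         + gaussian_mi(past[:, 2:], future[:, 2:]))
print(f"whole-minus-sum integrated information (one bipartition): {whole - parts:.3f}")
```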
ABSTRACT Neurodata Without Borders: Neurophysiology (NWB:N) is a data standard for neurophysiology, providing neuroscientists with a common standard to share, archive, use, and build analysis tools for neurophysiology data. With NWB:N version 2.0 (NWB:N 2.0) we made significant advances towards creating a usable standard, software ecosystem, and vibrant community for standardizing neurophysiology data. In this manuscript we focus in particular on the schema and present an accessible standard for neurophysiology.
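For readers unfamiliar with the standard, here is a minimal example of writing and reading an NWB file with pynwb, the reference Python API for NWB (assuming pynwb is installed); the file contents are toy data.

```python
from datetime import datetime
from dateutil.tz import tzlocal
import numpy as np
from pynwb import NWBFile, TimeSeries, NWBHDF5IO

# Create a minimal NWB file and add one acquisition TimeSeries.
nwbfile = NWBFile(
    session_description="toy session illustrating the NWB format",
    identifier="EXAMPLE-0001",
    session_start_time=datetime.now(tzlocal()),
)
signal = TimeSeries(
    name="raw_signal",
    data=np.random.randn(1000, 4),   # 1000 samples, 4 channels
    unit="volts",
    rate=1000.0,                     # sampling rate in Hz
)
nwbfile.add_acquisition(signal)

with NWBHDF5IO("example.nwb", "w") as io:
    io.write(nwbfile)

# Read it back to confirm round-tripping through the standard.
with NWBHDF5IO("example.nwb", "r") as io:
    loaded = io.read()
    print(loaded.acquisition["raw_signal"].data[:5, 0])
```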
Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding combinatoric...
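A minimal sketch of dimensionality-preserving variable binding in a dense bipolar VSA, where binding is the elementwise (Hadamard) product and role-filler pairs are superposed; the sparse-representation binding that is the subject of this work uses a different mechanism, so this is only a generic illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4096

def rand_vec():
    return rng.choice([-1.0, 1.0], size=D)

# Roles and fillers are random bipolar vectors of the same dimension D.
roles = {name: rand_vec() for name in ("subject", "verb", "object")}
fillers = {name: rand_vec() for name in ("Mary", "loves", "pizza", "John")}

# Binding = elementwise (Hadamard) product: a bound pair is again a D-vector,
# so composite structures keep the same dimensionality.
sentence = (roles["subject"] * fillers["Mary"]
            + roles["verb"] * fillers["loves"]
            + roles["object"] * fillers["pizza"])

def unbind(trace, role):
    """For bipolar vectors the Hadamard product is its own inverse: r * r = 1."""
    query = trace * role
    names = list(fillers)
    sims = [query @ fillers[n] / D for n in names]
    return names[int(np.argmax(sims))]

print(unbind(sentence, roles["object"]))   # -> 'pizza' (plus crosstalk noise)
```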
Abstract In this perspective article, we consider the critical issue of data and other research object standardisation and, specifically, how international collaboration and organizations such as the International Neuroinformatics Coordinating Facility (INCF) can encourage that emerging neuroscience data be Findable, Accessible, Interoperable, and Reusable (FAIR). As neuroscientists engaged in the sharing and integration of multi-modal and multiscale data, we see the current insufficiency of standards as a major impediment...
Vector space models for symbolic processing that encode symbols by random vectors have been proposed in cognitive science and connectionist communities under the names Vector Symbolic Architecture (VSA) and, synonymously, Hyperdimensional (HD) computing [22, 31, 46]. In this paper, we generalize VSAs to function spaces by mapping continuous-valued data into a vector space such that the inner product between the representations of any two data points approximately represents a similarity kernel. By analogy to VSA, we call this new...
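A minimal sketch of the kind of encoding this abstract describes: scalars are mapped to complex "fractional power" vectors whose normalized inner product approximates a similarity kernel. The Gaussian distribution of the base phases is an assumption made here for illustration; with that choice the induced kernel is approximately Gaussian.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 10_000
sigma = 1.0

# Random base phases; the encoding z(x)_d = exp(i * phi_d * x) is a fractional
# power of the base vector exp(i * phi_d).
phi = rng.normal(0.0, sigma, size=D)

def encode(x):
    return np.exp(1j * phi * x)

def similarity(x, y):
    """Normalized inner product of the encodings of two scalars."""
    return np.real(np.vdot(encode(x), encode(y))) / D

# With Gaussian-distributed phases the induced kernel approximates
# exp(-sigma^2 * (x - y)^2 / 2).
for d in (0.0, 0.5, 1.0, 2.0):
    est = similarity(0.0, d)
    exact = np.exp(-sigma**2 * d**2 / 2)
    print(f"delta={d:3.1f}  estimated={est:+.3f}  kernel={exact:+.3f}")
```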