- Neural Networks and Applications
- Neural Dynamics and Brain Function
- Advanced Memory and Neural Computing
- Computability, Logic, AI Algorithms
- Ferroelectric and Negative Capacitance Devices
- Machine Learning and Algorithms
- Cellular Automata and Applications
- Gene Regulatory Network Analysis
- Evolutionary Algorithms and Applications
- Neural Networks and Reservoir Computing
- Fuzzy Logic and Control Systems
- Bioinformatics and Genomic Networks
- EEG and Brain-Computer Interfaces
- Reinforcement Learning in Robotics
- Functional Brain Connectivity Studies
- Photoreceptor and Optogenetics Research
- Evolution and Genetic Dynamics
- Computational Drug Discovery Methods
- Natural Language Processing Techniques
- Quantum Computing Algorithms and Architecture
- Topic Modeling
- Control Systems and Identification
- Domain Adaptation and Few-Shot Learning
- Mathematical Biology Tumor Growth
- Evolutionary Game Theory and Cooperation
University of Massachusetts Amherst
2014-2024
Amherst College
2001-2023
Mohamed bin Zayed University of Artificial Intelligence
2023
Massachusetts Institute of Technology
2000-2021
Defense Advanced Research Projects Agency
2019-2020
University of Virginia
2018
University of Massachusetts Boston
2015
Dynamic Systems (United States)
2013-2014
Harvard University
2008-2010
Rutgers, The State University of New Jersey
1991-2005
We present a novel clustering method using the approach of support vector machines. Data points are mapped by means of a Gaussian kernel to a high-dimensional feature space, where we search for the minimal enclosing sphere. This sphere, when mapped back to data space, can separate into several components, each enclosing a separate cluster of points. We present a simple algorithm for identifying these clusters. The width of the Gaussian kernel controls the scale at which the data is probed, while the soft margin constant helps in coping with outliers and overlapping clusters. The structure of a dataset is explored by varying...
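A minimal sketch of the enclosing-sphere idea described in this abstract, assuming a simple projected-gradient solver for the SVDD dual (the solver, learning rate, and parameter values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def gaussian_kernel(A, B, q):
    # K(x, y) = exp(-q * ||x - y||^2); q is the kernel width parameter
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-q * d2)

def fit_sphere(X, q=1.0, C=1.0, steps=2000, lr=0.01):
    """Approximate the minimal enclosing sphere in feature space:
    maximize sum(b) - b^T K b subject to sum(b) = 1, 0 <= b <= C,
    using clipped gradient ascent (a sketch, not a production QP solver)."""
    n = len(X)
    K = gaussian_kernel(X, X, q)
    b = np.full(n, 1.0 / n)
    for _ in range(steps):
        grad = 1.0 - 2.0 * K @ b        # gradient of the dual objective
        b = np.clip(b + lr * grad, 0.0, C)
        b /= b.sum()                     # approximate simplex projection
    return b

def sphere_distance2(Xnew, X, b, q):
    """Squared feature-space distance of new points to the sphere center;
    points beyond the sphere radius fall outside every cluster contour."""
    Kxz = gaussian_kernel(Xnew, X, q)
    Kzz = gaussian_kernel(X, X, q)
    return 1.0 - 2.0 * Kxz @ b + b @ Kzz @ b
```

Points inside a dense cluster get a smaller feature-space distance than outliers, which is what the cluster-labeling step of the method exploits.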
Recently, fully connected recurrent neural networks have been proven to be computationally rich, at least as powerful as Turing machines. This work focuses on another network architecture which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models) and are therefore called NARX networks. As opposed to other recurrent networks, their limited feedback comes only from the output neuron rather than from hidden...
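A minimal sketch of the NARX architecture the abstract describes: the only recurrence is a tapped delay line of past *outputs*, combined with delayed exogenous inputs and fed through a small MLP (the layer sizes, delays, and random weights here are illustrative assumptions):

```python
import numpy as np

def narx_step(y_hist, u_hist, W1, b1, W2, b2):
    """One NARX update: the next output is an MLP applied to a window of
    past outputs (the only feedback path) and past exogenous inputs."""
    x = np.concatenate([y_hist, u_hist])   # tapped delay lines
    h = np.tanh(W1 @ x + b1)               # hidden layer
    return float(W2 @ h + b2)              # single output neuron

def run_narx(u, dy=2, du=2, rng=np.random.default_rng(0)):
    """Drive a randomly initialized NARX net with an input sequence u."""
    W1 = rng.normal(0, 0.5, (8, dy + du)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, 8); b2 = 0.0
    y_hist = np.zeros(dy); ys = []
    for t in range(len(u) - du + 1):
        u_hist = u[t:t + du]
        y = narx_step(y_hist, u_hist, W1, b1, W2, b2)
        y_hist = np.roll(y_hist, 1); y_hist[0] = y   # shift feedback taps
        ys.append(y)
    return np.array(ys)
```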
Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, a mechanism thought to be important for protecting memories is the reactivation of neuronal activity patterns representing those memories. In artificial networks, such memory replay can be implemented as 'generative replay', which can successfully, and surprisingly efficiently, prevent forgetting on toy examples even in...
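A minimal sketch of the generative-replay loop: mini-batches of the new task are mixed with samples drawn from a generative model of earlier tasks. The `FrozenGenerator` stand-in below (which just resamples stored statistics) is an assumption for illustration; in practice it would be a trained VAE or GAN:

```python
import numpy as np

class FrozenGenerator:
    """Stand-in for a generative model of previously learned tasks;
    here it simply resamples stored Gaussian statistics."""
    def __init__(self, mean, std, label):
        self.mean, self.std, self.label = mean, std, label

    def sample(self, n, rng):
        x = rng.normal(self.mean, self.std, size=(n, len(self.mean)))
        return x, np.full(n, self.label)

def generative_replay_batches(new_x, new_y, generator, batch, rng):
    """Yield mixed mini-batches: half new-task data, half 'replayed'
    samples drawn from the generator of earlier tasks."""
    for i in range(0, len(new_x), batch):
        xb, yb = new_x[i:i + batch], new_y[i:i + batch]
        rx, ry = generator.sample(len(xb), rng)   # replayed pseudo-data
        yield np.vstack([xb, rx]), np.concatenate([yb, ry])
```

Training the classifier on these mixed batches is what counteracts forgetting: gradients from the replayed samples keep the old decision boundaries in place.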
The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and biologically inspired algorithms. Existing frameworks support a wide range of functionality, abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for simulating spiking neural networks, specifically geared towards machine learning and reinforcement learning. Our software, called...
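To illustrate the kind of neuron model such simulators integrate, here is a minimal leaky integrate-and-fire (LIF) simulation in plain numpy; it is a generic sketch with assumed parameter values, not the API of the package the abstract describes:

```python
import numpy as np

def simulate_lif(input_current, v_rest=-65.0, v_reset=-65.0,
                 v_thresh=-52.0, tau=100.0, dt=1.0):
    """Euler integration of a leaky integrate-and-fire neuron:
    tau * dv/dt = (v_rest - v) + I(t); spike and reset at threshold."""
    v = v_rest
    spikes, trace = [], []
    for I in input_current:
        v += (dt / tau) * ((v_rest - v) + I)   # leak toward rest + drive
        if v >= v_thresh:
            spikes.append(1); v = v_reset      # emit spike, reset membrane
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)
```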
Extensive efforts have been made to prove the Church-Turing thesis, which suggests that all realizable dynamical and physical systems cannot be more powerful than classical models of computation. A simply described but highly chaotic dynamical system called the analog shift map is presented here; it has computational power beyond the Turing limit (super-Turing), computing exactly like analog neural networks and analog machines. This system is conjectured to describe natural phenomena.
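A sketch of one step of a generalized shift on a dotted sequence, the finite-word special case of the map the abstract describes (the rule table and symbols are made up for illustration). The analog shift differs in that the substituted word may be infinite, i.e. a real number, which is the source of the super-Turing power:

```python
def generalized_shift(left, right, rules):
    """One step of a generalized shift on the dotted sequence left.right:
    read the symbol at the dot, substitute a word, then shift the dot
    by the prescribed amount."""
    head = right[0]
    word, shift = rules[head]              # (replacement word, shift)
    right = word + right[1:]               # substitute at the dot
    if shift > 0:                          # move the dot right
        left, right = left + right[:shift], right[shift:]
    elif shift < 0:                        # move the dot left
        left, right = left[:shift], left[shift:] + right
    return left, right
```

For example, with `rules = {'a': ('b', 1), 'b': ('a', -1)}`, the configuration `x.ab` first becomes `xb.b` and then returns to `x.ba`.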
This paper deals with finite networks which consist of interconnections of synchronously evolving processors. Each processor updates its state by applying a "sigmoidal" scalar nonlinearity to a linear combination of the previous states of all units. We prove that one may simulate all Turing Machines by such rational nets. In particular, one can do this in real time, and there is a net made up of about 1,000 processors which computes a universal partial-recursive function. Products (high-order nets) are not required, contrary to what had...
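The heart of such Turing-machine simulations is encoding an unbounded binary stack in a single rational state and manipulating it with affine maps plus a saturated-linear "sigmoid". A sketch of the standard base-4 Cantor encoding, using exact rationals (the function names are mine; the encoding itself is the classical construction):

```python
from fractions import Fraction

def sigma(x):
    # saturated-linear "sigmoid": sigma(x) = min(max(x, 0), 1)
    return min(max(x, Fraction(0)), Fraction(1))

def push(q, bit):
    # stack word w encoded as q = sum_i (2*w_i + 1) / 4^i
    return q / 4 + Fraction(2 * bit + 1, 4)

def top(q):
    # read the top bit with one affine map and one saturation
    return sigma(4 * q - 2)

def pop(q):
    # invert the push: strip the leading base-4 digit
    return 4 * q - (2 * top(q) + 1)
```

Because every operation is a linear combination followed by `sigma`, each can be wired as one processor of the rational net.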
Though widely hypothesized, limited evidence exists that human brain functions organize in global gradients of abstraction starting from sensory cortical inputs. Hierarchical representation is accepted in computational networks, and tentatively in visual neuroscience, yet no direct holistic demonstrations exist in vivo. Our methods developed network models enriched with tiered directionality by including input locations, a critical feature for localizing abstraction generally. Grouped primary cortices...
Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated into deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. These algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised,...
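The simplest replay-like mechanism in deep learning stores raw past experiences and mixes them back into training. A minimal sketch using reservoir sampling so the fixed-size buffer stays an approximately uniform sample of everything seen (the class and parameters are illustrative, not a specific paper's implementation):

```python
import random

class ReplayBuffer:
    """Fixed-size buffer of past experiences; reservoir sampling keeps an
    approximately uniform sample over everything seen so far."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = self.rng.randrange(self.seen)   # reservoir sampling
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        """Draw a replay mini-batch to interleave with new training data."""
        return self.rng.sample(self.items, min(k, len(self.items)))
```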
We present a novel kernel method for data clustering using a description of the data by support vectors. The kernel reflects a projection of the data points from data space to a high-dimensional feature space. Cluster boundaries are defined as spheres in feature space, which represent complex geometric shapes in data space. We utilize this representation to construct a simple clustering algorithm.
We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity, so the nature of the updates is assumed to be unconstrained. In this context, we show that so-called plastic recurrent neural networks (RNNs) are capable of the precise super-Turing computational power, as static analog networks, irrespective of whether their weights are modeled by rational or real numbers,...
We present a system comprising a hybridization of self-organized map (SOM) properties with spiking neural networks (SNNs) that retains many of the features of SOMs. Networks are trained in an unsupervised manner to learn a lattice of filters via excitatory-inhibitory interactions among populations of neurons. We develop and test various inhibition strategies, such as inhibition growing with inter-neuron distance and two distinct levels of inhibition. The quality of the learning algorithm is evaluated using examples with known labels. Several...
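A sketch of the distance-dependent inhibition strategy mentioned above: inhibitory weight between two units grows with their distance on the lattice, so nearby units cooperate while distant units compete. The exponential profile and parameter values are assumptions for illustration:

```python
import numpy as np

def lateral_inhibition_weights(n_units, w_max=1.0, growth=0.5):
    """Build an inhibitory weight matrix for a square lattice of n_units
    neurons, with inhibition growing with inter-neuron lattice distance."""
    side = int(np.sqrt(n_units))
    coords = np.array([(i // side, i % side) for i in range(n_units)])
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    W = -w_max * (1.0 - np.exp(-growth * d))   # weaker near, stronger far
    np.fill_diagonal(W, 0.0)                   # no self-inhibition
    return W
```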
The prefrontal cortex (PFC) enables humans' ability to flexibly adapt to new environments and circumstances. Disruption of this ability is often a hallmark of disease. Neural network models have provided tools to study how the PFC stores and uses information, yet the mechanisms underlying how it is able to learn about new situations without disrupting preexisting knowledge remain unknown. We use a neural network architecture to show that hierarchical gating can naturally support adaptive learning while preserving memories from prior...
The computational power of recurrent neural networks is shown to depend ultimately on the complexity of the real constants (weights) of the network. The complexity, or information contents, of the weights is measured by a variant of resource-bounded Kolmogorov (1965) complexity, taking into account the time required for constructing the numbers. In particular, we reveal a full and proper hierarchy of nonuniform complexity classes associated with networks having weights of increasing complexity.
We analyze a class of ordinary differential equations representing a simplified model of a genetic network. In this network, the genes control the production rates of other genes by a logical function. The dynamics in these equations are represented by a directed graph on an n-dimensional hypercube (n-cube) in which each edge has a unique orientation. The vertices of the n-cube correspond to orthants of state space, and the edges correspond to boundaries between adjacent orthants. The dynamics can be described symbolically. Starting from a point on the boundary between neighboring orthants, the equation...
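A minimal sketch of this class of piecewise-linear (Glass-type) equations: dx/dt = F(s) - x, where s is the Boolean on/off pattern of the genes and F maps each orthant to a constant focal point. The two-gene feedback loop below is an assumed example, not one from the paper; its trajectory crosses orthant boundaries in a cycle, which is exactly the orientation structure on the n-cube the abstract describes:

```python
import numpy as np

def simulate_glass(F, x0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = F(s) - x, where s = (x > 0) is the Boolean
    gene state and F(s) is the focal point of the current orthant.
    Returns the final state and the sequence of orthants visited."""
    x = np.array(x0, dtype=float)
    orthants = []
    for _ in range(steps):
        s = tuple(int(v > 0) for v in x)
        if not orthants or orthants[-1] != s:
            orthants.append(s)            # record each orthant crossing
        x += dt * (np.array(F(s)) - x)    # flow straight at the focal point
    return x, orthants

# Assumed example: gene 0 is on unless gene 1 represses it,
# gene 1 follows gene 0 -> a negative feedback loop that cycles
# through all four orthants of the 2-cube.
def F(s):
    return (1.0 if s[1] == 0 else -1.0,
            1.0 if s[0] == 1 else -1.0)
```

Listing the successive sign vectors visited is the "symbolic" description of the dynamics: each transition corresponds to traversing one oriented edge of the n-cube.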