Hava T. Siegelmann

ORCID: 0000-0003-4938-8723
Research Areas
  • Neural Networks and Applications
  • Neural Dynamics and Brain Function
  • Advanced Memory and Neural Computing
  • Computability, Logic, AI Algorithms
  • Ferroelectric and Negative Capacitance Devices
  • Machine Learning and Algorithms
  • Cellular Automata and Applications
  • Gene Regulatory Network Analysis
  • Evolutionary Algorithms and Applications
  • Neural Networks and Reservoir Computing
  • Fuzzy Logic and Control Systems
  • Bioinformatics and Genomic Networks
  • EEG and Brain-Computer Interfaces
  • Reinforcement Learning in Robotics
  • Functional Brain Connectivity Studies
  • Photoreceptor and Optogenetics Research
  • Evolution and Genetic Dynamics
  • Computational Drug Discovery Methods
  • Natural Language Processing Techniques
  • Quantum Computing Algorithms and Architecture
  • Topic Modeling
  • Control Systems and Identification
  • Domain Adaptation and Few-Shot Learning
  • Mathematical Biology Tumor Growth
  • Evolutionary Game Theory and Cooperation

University of Massachusetts Amherst
2014-2024

Amherst College
2001-2023

Mohamed bin Zayed University of Artificial Intelligence
2023

Massachusetts Institute of Technology
2000-2021

Defense Advanced Research Projects Agency
2019-2020

University of Virginia
2018

University of Massachusetts Boston
2015

Dynamic Systems (United States)
2013-2014

Harvard University
2008-2010

Rutgers, The State University of New Jersey
1991-2005

We present a novel clustering method using the approach of support vector machines. Data points are mapped by means of a Gaussian kernel to a high-dimensional feature space, where we search for the minimal enclosing sphere. This sphere, when mapped back to data space, can separate into several components, each enclosing a separate cluster of points. We present a simple algorithm for identifying these clusters. The width of the Gaussian kernel controls the scale at which the data is probed, while the soft margin constant helps in coping with outliers and overlapping clusters. The structure of a dataset is explored by varying...

10.5555/944790.944807 article EN Journal of Machine Learning Research 2002-03-01
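
Below is a minimal sketch of the clustering step described above: scikit-learn's RBF one-class SVM stands in for the minimal enclosing sphere in feature space, and points are grouped by checking that the line segment between them stays inside the sphere's pre-image. The parameter names and the segment-sampling heuristic are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of support-vector clustering (SVC): sphere fit + connectivity.
import numpy as np
from sklearn.svm import OneClassSVM
from scipy.sparse.csgraph import connected_components

def svc_labels(X, gamma=1.0, nu=0.1, n_interp=10):
    svm = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(X)
    n = len(X)
    adj = np.zeros((n, n), dtype=bool)
    ts = np.linspace(0.0, 1.0, n_interp)[:, None]
    for i in range(n):
        for j in range(i + 1, n):
            seg = (1.0 - ts) * X[i] + ts * X[j]   # sample the segment i -> j
            # connect i and j only if the whole segment lies inside the sphere
            adj[i, j] = adj[j, i] = (svm.decision_function(seg) >= 0).all()
    _, labels = connected_components(adj, directed=False)
    return labels

X = np.vstack([np.random.randn(30, 2) - 3, np.random.randn(30, 2) + 3])
print(svc_labels(X, gamma=0.5))   # two well-separated blobs -> two labels
```

Here gamma plays the role of the kernel width (scale of probing) and nu that of the soft-margin constant for outliers, mirroring the two parameters the abstract varies.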

10.1006/jcss.1995.1013 article EN publisher-specific-oa Journal of Computer and System Sciences 1995-02-01

Recently, fully connected recurrent neural networks have been proven to be computationally rich, at least as powerful as Turing machines. This work focuses on another type of network, one which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models) and are therefore called NARX networks. As opposed to other recurrent networks, their limited feedback comes only from the output neuron rather than from hidden...

10.1109/3477.558801 article EN IEEE Transactions on Systems Man and Cybernetics Part B (Cybernetics) 1997-04-01
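
As a hedged illustration of the NARX idea (feedback only through tapped delays of the output, fed with exogenous inputs into a feedforward net), the sketch below fits an MLP on lagged outputs and inputs. The system being learned, the lag depth, and the network size are arbitrary choices for the example.

```python
# Minimal NARX-style regression: predict y(t) from y(t-1..t-d) and u(t-1..t-d).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T, d = 2000, 3                         # series length, delay depth
u = rng.uniform(-1, 1, T)              # exogenous input
y = np.zeros(T)
for t in range(1, T):                  # an arbitrary nonlinear system to learn
    y[t] = 0.5 * np.tanh(y[t - 1]) + 0.4 * u[t - 1]

# Regressor matrix of delayed outputs and inputs; target is y(t).
X = np.column_stack([y[d - k - 1:T - k - 1] for k in range(d)] +
                    [u[d - k - 1:T - k - 1] for k in range(d)])
target = y[d:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, target)
print("train MSE:", np.mean((model.predict(X) - target) ** 2))
```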

10.1016/0893-9659(91)90080-f article EN Applied Mathematics Letters 1991-01-01

Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, a mechanism thought to be important for protecting memories is the reactivation of neuronal activity patterns representing those memories. In artificial neural networks, such memory replay can be implemented as 'generative replay', which can successfully, and surprisingly efficiently, prevent catastrophic forgetting on toy examples even in...

10.1038/s41467-020-17866-2 article EN cc-by Nature Communications 2020-08-13
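
A schematic of the generative replay loop the abstract refers to: while learning new data, samples from a generator trained on earlier tasks are labeled by a frozen copy of the previous model and mixed into the loss. This is a generic hedged sketch; the stand-in generator and dummy data are assumptions, and the paper's brain-inspired variant replays internal representations rather than raw inputs.

```python
# Schematic generative-replay training loop (PyTorch).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
prev_model = copy.deepcopy(model).eval()       # frozen snapshot from task 1
for p in prev_model.parameters():
    p.requires_grad_(False)

def generate_replay(batch):
    # Stand-in generator: in practice a VAE/GAN trained on earlier tasks.
    return torch.randn(batch, 784)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_new = torch.randn(64, 784)                   # current-task batch (dummy data)
y_new = torch.randint(0, 10, (64,))

for step in range(100):
    x_rep = generate_replay(64)
    with torch.no_grad():                      # soft labels from the old model
        y_rep = F.softmax(prev_model(x_rep), dim=1)
    loss_new = F.cross_entropy(model(x_new), y_new)
    loss_rep = F.kl_div(F.log_softmax(model(x_rep), dim=1), y_rep,
                        reduction="batchmean")
    (loss_new + loss_rep).backward()
    opt.step(); opt.zero_grad()
```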

The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared towards machine learning and reinforcement learning. Our software, called...

10.3389/fninf.2018.00089 article EN cc-by Frontiers in Neuroinformatics 2018-12-12
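
The package described here is BindsNET. The snippet below follows its documented quickstart pattern (Poisson-encoded input driving a layer of leaky integrate-and-fire neurons), though exact argument names may differ across versions.

```python
# Minimal BindsNET-style network, adapted from the package's quickstart.
import torch
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection
from bindsnet.encoding import poisson

net = Network()
net.add_layer(Input(n=100), name="X")
net.add_layer(LIFNodes(n=50), name="Y")
net.add_connection(
    Connection(source=net.layers["X"], target=net.layers["Y"],
               w=0.05 + 0.1 * torch.randn(100, 50)),
    source="X", target="Y",
)

spikes = poisson(datum=torch.rand(100), time=250)  # 250 ms of Poisson input
net.run(inputs={"X": spikes}, time=250)
```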

10.1016/0304-3975(94)90178-3 article EN publisher-specific-oa Theoretical Computer Science 1994-09-01

Extensive efforts have been made to prove the Church-Turing thesis, which suggests that all realizable dynamical and physical systems cannot be more powerful than classical models of computation. A simply described but highly chaotic dynamical system called the analog shift map is presented here; it has computational power beyond the Turing limit (super-Turing), computing exactly like analog neural networks and analog machines. This dynamical system is conjectured to describe natural physical phenomena.

10.1126/science.268.5210.545 article EN Science 1995-04-28

This paper deals with finite networks which consist of interconnections of synchronously evolving processors. Each processor updates its state by applying a "sigmoidal" scalar nonlinearity to a linear combination of the previous states of all units. We prove that one may simulate all Turing machines by such rational nets. In particular, one can do this in linear time, and there is a net made up of about 1,000 processors which computes a universal partial-recursive function. Products (high-order nets) are not required, contrary to what had...

10.1145/130385.130432 article EN 1992-07-01
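
The Turing simulation in this line of work rests on encoding unbounded structures, such as a binary stack, into a single rational activation in [0,1] (a base-4 encoding using digits 1 and 3) that the saturated-linear sigmoid can manipulate. The sketch below illustrates that encoding in the spirit of the construction; it is not the paper's actual 1,000-processor network.

```python
# Stack-in-a-rational encoding: bit b is stored as base-4 digit 2b+1.
def sigma(x):                      # saturated-linear "sigmoid"
    return min(max(x, 0.0), 1.0)

def push(q, bit):                  # prepend digit 2*bit + 1 (i.e. 1 or 3)
    return q / 4.0 + (2 * bit + 1) / 4.0

def top(q):                        # read the top bit with one sigma unit
    return sigma(4.0 * q - 2.0)

def pop(q):                        # strip the top digit
    return sigma(4.0 * q - (2 * top(q) + 1))

q = 0.0
for b in [1, 0, 1]:                # push 1, then 0, then 1
    q = push(q, b)
print(top(q))                      # -> 1.0 (last pushed bit)
q = pop(q)
print(top(q))                      # -> 0.0
```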

Though widely hypothesized, limited evidence exists that human brain functions organize in global gradients of abstraction starting from sensory cortical inputs. Hierarchical representation is accepted in computational networks, and tentatively in visual neuroscience, yet no direct holistic demonstrations exist in vivo. Our methods developed network models enriched with tiered directionality, by including input locations, a critical feature for localizing abstraction generally. Grouped primary cortices...

10.1038/srep18112 article EN cc-by Scientific Reports 2015-12-16

Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated into deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised,...

10.1162/neco_a_01433 article EN Neural Computation 2021-08-30
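
For concreteness, here is a minimal experience-replay buffer of the kind this review surveys: stored experiences are sampled and interleaved with current data during training. A generic sketch, not tied to any specific method in the review.

```python
# Bounded FIFO replay buffer with uniform random sampling.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)   # oldest entries are evicted

    def add(self, experience):
        self.buf.append(experience)

    def sample(self, k):
        return random.sample(self.buf, min(k, len(self.buf)))

buf = ReplayBuffer(capacity=1000)
for t in range(5000):
    buf.add((f"state_{t}", f"action_{t % 4}"))
mixed_batch = buf.sample(32)                # interleave with current data
print(len(mixed_batch), mixed_batch[0])
```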

We present a novel kernel method for data clustering using a description of the data by support vectors. The kernel reflects a projection of the data points from data space to a high-dimensional feature space. Cluster boundaries are defined as spheres in feature space, which represent complex geometric shapes in data space. We utilize this geometric representation to construct a simple clustering algorithm.

10.1109/icpr.2000.906177 article EN 2002-11-11

10.1006/inco.1996.0062 article EN publisher-specific-oa Information and Computation 1996-07-01

We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity, so that the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the precise super-Turing computational power of static analog networks, irrespective of whether their weights are modeled by rational or real numbers,...

10.1142/s0129065714500294 article EN International Journal of Neural Systems 2014-09-16

We present a system comprising a hybridization of self-organized map (SOM) properties with spiking neural networks (SNNs) that retains many of the features of SOMs. Networks are trained in an unsupervised manner to learn a lattice of filters via excitatory-inhibitory interactions among populations of neurons. We develop and test various inhibition strategies, such as inhibition growing with inter-neuron distance and two distinct levels of inhibition. The quality of the learning algorithm is evaluated using examples with known labels. Several...

10.1109/ijcnn.2018.8489673 article EN 2018 International Joint Conference on Neural Networks (IJCNN) 2018-07-01
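
One inhibition strategy named in the abstract, inhibition that grows with inter-neuron distance, can be sketched as a lattice-distance-dependent weight matrix. The construction below is an illustrative assumption about the form of that kernel, not the paper's exact parameters.

```python
# Lateral inhibition growing with inter-neuron distance on a 2-D lattice.
import numpy as np

side = 10                                   # 10x10 lattice of neurons
coords = np.array([(i, j) for i in range(side) for j in range(side)])
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

max_inhib = 1.0                             # strongest inhibition at max range
W_inhib = -max_inhib * dist / dist.max()    # farther pairs inhibit more
np.fill_diagonal(W_inhib, 0.0)              # no self-inhibition

print(W_inhib.shape, W_inhib.min(), W_inhib.max())
```

Distance-scaled inhibition pushes nearby neurons to learn similar filters, which is what gives the spiking network its SOM-like topographic organization.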

Significance: The prefrontal cortex (PFC) enables humans’ ability to flexibly adapt to new environments and circumstances. Disruption of this ability is often a hallmark of disease. Neural network models have provided tools to study how the PFC stores and uses information, yet the mechanisms underlying how the PFC is able to learn about new situations without disrupting preexisting knowledge remain unknown. We use a neural network architecture to show that hierarchical gating can naturally support adaptive learning while preserving memories from prior...

10.1073/pnas.2009591117 article EN Proceedings of the National Academy of Sciences 2020-11-05
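
As a loose illustration of gating, the sketch below multiplicatively gates hidden units with a per-context binary mask, so that different situations engage different subnetworks and interference between them is limited. This generic scheme is an assumption for illustration; the paper's hierarchical gating architecture is more structured.

```python
# Context-gated network: each context activates a fixed random subnetwork.
import torch
import torch.nn as nn

class GatedNet(nn.Module):
    def __init__(self, n_in=20, n_hidden=100, n_out=5, n_contexts=4):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_out)
        # One fixed random binary gate (subnetwork mask) per context.
        g = (torch.rand(n_contexts, n_hidden) < 0.5).float()
        self.register_buffer("gates", g)

    def forward(self, x, context):
        h = torch.relu(self.fc1(x)) * self.gates[context]  # gate hidden units
        return self.fc2(h)

net = GatedNet()
x = torch.randn(8, 20)
print(net(x, context=0).shape)   # same input routed through subnetwork 0
```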

The computational power of recurrent neural networks is shown to depend ultimately on the complexity of the real constants (weights) of the network. The complexity, or information contents, of the weights is measured by a variant of resource-bounded Kolmogorov (1965) complexity, taking into account the time required for constructing the numbers. In particular, we reveal a full and proper hierarchy of nonuniform complexity classes associated with networks having weights of increasing Kolmogorov complexity.

10.1109/18.605580 article EN IEEE Transactions on Information Theory 1997-07-01

We analyze a class of ordinary differential equations representing a simplified model of a genetic network. In this network, the genes control the production rates of other genes by a logical function. The dynamics in these equations are represented by a directed graph on an n-dimensional hypercube (n-cube) in which each edge has a unique orientation. The vertices of the n-cube correspond to orthants of state space, and the edges correspond to boundaries between adjacent orthants. The dynamics can be described symbolically. Starting from a point on the boundary between neighboring orthants, the equation...

10.1063/1.1336498 article EN Chaos An Interdisciplinary Journal of Nonlinear Science 2001-03-01
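
The equations analyzed here are piecewise-linear "Glass-type" gene network equations of the form dx_i/dt = -x_i + F_i(X̃), where F_i is a logical function of the Boolean orthant coordinates X̃. Below is a hedged two-gene mutual-inhibition example simulated by forward Euler; the specific logic and parameters are illustrative, while the paper treats the general n-cube structure.

```python
# Two-gene Glass network with mutual inhibition: dx_i/dt = -x_i + F_i(X).
import numpy as np

def F(x):
    b = (x > 0).astype(float)          # Boolean orthant coordinates
    # Each gene is produced only when the other is off; targets in {-1, 1}.
    return np.array([1.0 - 2.0 * b[1], 1.0 - 2.0 * b[0]])

x, dt = np.array([0.3, -0.2]), 0.01
for _ in range(1000):                  # forward Euler integration
    x = x + dt * (-x + F(x))
print(np.sign(x))                      # settles in one orthant of the 2-cube
```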