- Neural dynamics and brain function
- Stochastic dynamics and bifurcation
- Photoreceptor and optogenetics research
- Machine Learning in Materials Science
- Neural Networks and Applications
- Theoretical and Computational Physics
- Human Pose and Action Recognition
- Reinforcement Learning in Robotics
- Neuroscience and Neural Engineering
- Advanced Memory and Neural Computing
- Nonlinear Dynamics and Pattern Formation
- Machine Learning and Algorithms
- Protein Structure and Dynamics
- Advanced Graph Neural Networks
- Diffusion and Search Dynamics
- Olfactory and Sensory Function Studies
- Memory and Neural Mechanisms
- Explainable Artificial Intelligence (XAI)
- Neural Networks Stability and Synchronization
- Educational Games and Gamification
- Artificial Intelligence in Games
- Ecosystem dynamics and resilience
- Material Dynamics and Properties
- Neural Networks and Reservoir Computing
- AI-based Problem Solving and Planning
DeepMind (United Kingdom)
2020-2023
Universidad Rey Juan Carlos
2004-2018
New York University
2010-2011
To solve complex real-world problems with reinforcement learning, we cannot rely on manually specified reward functions. Instead, we can have humans communicate an objective to the agent directly. In this work, we combine two approaches to learning from human feedback: expert demonstrations and trajectory preferences. We train a deep neural network to model the reward function and use its predicted reward to train a DQN-based agent on 9 Atari games. Our approach beats the imitation learning baseline in 7 games and achieves strictly superhuman performance on 2...
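As a rough illustration of the preference-learning component described in this abstract, the sketch below fits a reward model to synthetic pairwise comparisons with a Bradley-Terry loss. It assumes a linear reward over made-up features instead of the paper's deep network, Atari frames and DQN agent; every name and constant is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each state is a feature vector; the reward model is linear,
# r_theta(s) = theta . phi(s).  A real system would use a deep network over pixels.
D = 8                          # feature dimension (illustrative)
theta = np.zeros(D)            # reward-model parameters
true_w = rng.normal(size=D)    # hidden "true" preference direction (simulation only)

def preference_prob(theta, seg_a, seg_b):
    """Bradley-Terry probability that the annotator prefers segment A over B,
    based on the predicted returns (sums of per-step predicted rewards)."""
    ra = (seg_a @ theta).sum()
    rb = (seg_b @ theta).sum()
    return 1.0 / (1.0 + np.exp(rb - ra))

def sample_comparison(T=10):
    """Simulate a labelled comparison: the annotator prefers the higher true return."""
    seg_a = rng.normal(size=(T, D))
    seg_b = rng.normal(size=(T, D))
    label = 1.0 if (seg_a @ true_w).sum() > (seg_b @ true_w).sum() else 0.0
    return seg_a, seg_b, label

# Fit theta by gradient descent on the cross-entropy preference loss.
lr = 0.05
for step in range(2000):
    seg_a, seg_b, label = sample_comparison()
    p = preference_prob(theta, seg_a, seg_b)
    # gradient of -[y*log p + (1-y)*log(1-p)] wrt theta = (p - y) * (sum phi_a - sum phi_b)
    grad = (p - label) * (seg_a.sum(axis=0) - seg_b.sum(axis=0))
    theta -= lr * grad

# The learned reward direction should align with the annotator's hidden preferences.
cosine = theta @ true_w / (np.linalg.norm(theta) * np.linalg.norm(true_w))
print(f"cosine(theta, true_w) = {cosine:.3f}")
```

In the full pipeline, the learned reward model would then be handed to a reinforcement-learning agent in place of the environment reward.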
We present a machine-learning approach, based on normalizing flows, for modelling atomic solids. Our model transforms an analytically tractable base distribution into the target solid without requiring ground-truth samples for training. We report Helmholtz free energy estimates for cubic and hexagonal ice modelled as monatomic water as well as for a truncated and shifted Lennard-Jones system, and find them to be in excellent agreement with literature values obtained with established baseline methods. We further investigate...
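The sketch below illustrates the underlying flow-based free-energy idea in one dimension, assuming a simple affine "flow" and Gaussian base and target energies; it is a toy stand-in for the paper's learned flows and atomic systems, with all symbols (MU, SIGMA, a, b) invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D version of flow-based free-energy estimation (targeted free-energy
# perturbation).  Base: standard normal, U_base(z) = z^2/2.  Target: normal with
# mean MU and width SIGMA, U(x) = (x - MU)^2 / (2 SIGMA^2).  Values are illustrative.
MU, SIGMA = 2.0, 0.5
dF_exact = -np.log(SIGMA)           # F_target - F_base = -ln(Z_target / Z_base)

def U_base(z):
    return 0.5 * z**2

def U_target(x):
    return 0.5 * ((x - MU) / SIGMA) ** 2

# An imperfect "flow": affine map with a slightly wrong scale (a trained flow
# would approximate the exact map T(z) = MU + SIGMA * z).
a, b = 0.45, 2.0
def flow(z):
    return b + a * z, np.log(a)     # returns T(z) and log|det Jacobian|

# Estimator:  dF = -ln E_base[ exp(-U(T(z)) + U_base(z) + ln|det J|) ]
z = rng.normal(size=200_000)
x, logdet = flow(z)
log_w = -U_target(x) + U_base(z) + logdet
dF_est = -(np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max())

print(f"exact  dF = {dF_exact:.4f}")
print(f"sample dF = {dF_est:.4f}")
```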
Experiments in various neural systems found avalanches: bursts of activity with characteristics typical for critical dynamics. A possible explanation for their occurrence is an underlying network that self-organizes into a critical state. We propose a simple spiking model for developing networks, showing how these may "grow into" criticality. Avalanches generated by our model correspond to clusters of the widely applied Hawkes processes. We analytically derive the cluster size and duration distributions and find that they agree with those...
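A minimal way to see the kind of statistics this abstract refers to is to simulate a critical branching (Galton-Watson) process, which is the cluster skeleton of the Hawkes picture mentioned above; the snippet below is a generic sketch, not the paper's developing-network model.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(branching_ratio, max_size=100_000):
    """Total number of spikes in one avalanche of a Galton-Watson branching process
    with Poisson offspring; the critical state corresponds to a mean offspring of 1."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(branching_ratio * active)
        size += active
    return min(size, max_size)

# At criticality, avalanche sizes are power-law distributed with exponent about -3/2.
sizes = np.array([avalanche_size(1.0) for _ in range(20_000)])
density, edges = np.histogram(sizes, bins=np.logspace(0, 4, 25), density=True)
centers = np.sqrt(edges[1:] * edges[:-1])
mask = (density > 0) & (centers < 1_000)
slope = np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)[0]
print(f"estimated size exponent: {slope:.2f}  (theory for critical branching: -1.5)")
```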
In the mammalian brain, allocentric representations support efficient self-location and flexible navigation. A number of distinct populations of these spatial responses have been identified, but no unified function has been shown to account for their emergence. Here we developed a network, trained with a simple predictive objective, that was capable of mapping egocentric information into an allocentric reference frame. The prediction of visual inputs was sufficient to drive the appearance of responses resembling those observed in rodents: head...
The present paper studies regular and complex spatiotemporal behaviors in networks of coupled map-based bursting oscillators. In-phase and antiphase synchronization of bursts are studied, explaining their underlying mechanisms in order to determine how network parameters separate them. Conditions for emergent bursting in the system are derived from our analysis. In the region of emergence, patterns of chaotic transitions between modes of propagation are found. We show that they consist of transient standing and rotating waves induced by...
A system consisting of two map-based neurons coupled through reciprocal excitatory or inhibitory chemical synapses is discussed. After a brief explanation of the basic mechanism behind the generation and synchronization of bursts, the parameter space is explored to determine less obvious but biologically meaningful regimes and effects. Among them, we show how, without any delays, the coupling may induce antiphase synchronization; that a synapse may change its character from excitatory to inhibitory, and vice versa, through changes in conductance or in reversal potential;...
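For readers unfamiliar with map-based neurons, the sketch below couples two Rulkov-type map neurons through a fast threshold synapse and reports a crude burst-overlap statistic. The map form is standard, but the synapse model, parameter values and thresholds are illustrative choices, not those of the paper.

```python
import numpy as np

# Rulkov-type two-dimensional map neuron: fast variable x (voltage-like),
# slow variable y.  Parameters are chosen to put the isolated map in a
# bursting-like regime; all values below are illustrative.
ALPHA, MU, SIGMA = 4.15, 0.001, 0.1
G_SYN, V_REV, THETA = 0.1, -2.8, 0.0   # conductance, reversal potential, firing threshold

def step(x, y, i_syn):
    x_next = ALPHA / (1.0 + x * x) + y + i_syn
    y_next = y - MU * (x + 1.0) + MU * SIGMA
    return x_next, y_next

def synaptic_current(x_pre, x_post):
    """Fast threshold (chemical-like) synapse: active only while the presynaptic
    fast variable exceeds THETA; a reversal potential below the silent-state values
    makes the coupling inhibitory here."""
    return G_SYN * (V_REV - x_post) * (x_pre > THETA)

# Two reciprocally coupled neurons with slightly different initial conditions.
x = np.array([-1.0, -1.2])
y = np.array([-2.9, -3.0])
trace = np.empty((5000, 2))
for n in range(trace.shape[0]):
    i01 = synaptic_current(x[1], x[0])   # current into neuron 0 from neuron 1
    i10 = synaptic_current(x[0], x[1])   # current into neuron 1 from neuron 0
    x[0], y[0] = step(x[0], y[0], i01)
    x[1], y[1] = step(x[1], y[1], i10)
    trace[n] = x

# Crude overlap statistic (transient discarded): the fraction of steps on which both
# neurons are simultaneously above threshold; low values hint at antiphase activity.
both_active = np.mean((trace[1000:, 0] > THETA) & (trace[1000:, 1] > THETA))
print(f"fraction of steps with both neurons firing: {both_active:.3f}")
```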
We present a machine-learning model based on normalizing flows that is trained to sample from the isobaric-isothermal ensemble. In our approach, we approximate the joint distribution of a fully-flexible triclinic simulation box and particle coordinates to achieve a desired internal pressure. This novel extension of flow-based sampling to the isobaric-isothermal ensemble yields direct estimates of Gibbs free energies. We test our NPT-flow on monatomic water in the cubic and hexagonal ice phases and find excellent agreement of Gibbs free energies and other observables...
Transformers have revolutionized machine learning with their simple yet effective architecture. Pre-training on massive text datasets from the Internet has led to unmatched generalization for natural language understanding (NLU) tasks. However, such models remain fragile when tasked with algorithmic forms of reasoning, where computations must be precise and robust. To address this limitation, we propose a novel approach that combines the Transformer's language understanding with the robustness of graph neural network (GNN)-based...
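The snippet below sketches the general hybrid idea, one message-passing step on a small graph followed by token-to-node cross-attention, in plain NumPy. It is not the architecture proposed in the paper; sizes, initialisations and the single-head attention are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16                                   # shared hidden size (illustrative)

def mlp(x, w1, b1, w2, b2):
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

# --- Graph side: one round of max-aggregation message passing on a 5-node ring.
num_nodes = 5
adj = np.zeros((num_nodes, num_nodes), dtype=bool)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    adj[u, v] = adj[v, u] = True

node_h = rng.normal(size=(num_nodes, D))
w1, b1 = rng.normal(size=(2 * D, D)) * 0.1, np.zeros(D)
w2, b2 = rng.normal(size=(D, D)) * 0.1, np.zeros(D)

messages = np.where(adj[:, :, None], node_h[None, :, :], -np.inf)  # (dst, src, D)
agg = messages.max(axis=1)                                         # max over neighbours
node_h = mlp(np.concatenate([node_h, agg], axis=-1), w1, b1, w2, b2)

# --- Text side: token embeddings cross-attend to the node embeddings, so the
# language stream can read the graph reasoner's state (single head, no masking).
num_tokens = 7
tok_h = rng.normal(size=(num_tokens, D))
Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))

q, k, v = tok_h @ Wq, node_h @ Wk, node_h @ Wv
scores = q @ k.T / np.sqrt(D)                            # (tokens, nodes)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
tok_h = tok_h + attn @ v                                 # residual update of the tokens

print("updated token states:", tok_h.shape)
```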
We study the dynamics of networks of inhibitory map-based bursting neurons. Linear analysis allows us to understand how patterns are determined by the network topology and how they depend on the strength of synaptic connections when inhibition is balanced. Two kinds of patterns are found depending on the symmetry of the network: slow cyclic patterns riding on subthreshold oscillations, where almost all neurons contribute bursts in a sparse manner, and fast patterns in which only one of two mutually exclusive groups takes part. We also discuss the properties of the neuron model that...
Through phase-plane analysis of a class of two-dimensional spiking and bursting neuron models, covering some of the most popular map-based models, we show that there exists a trade-off between sensitivity to steady external stimulation and resonance properties, and how this trade-off may be tuned by the neutral or asymptotic character of the slow variable. Implications of these results for the suprathreshold behavior of neurons, both by themselves and as part of networks, are presented in different regimes of interest, such as the excitable and regular spiking regimes...
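To make the fast-slow structure referred to above concrete, a generic form of such two-dimensional maps is sketched below; F, g, h and epsilon are placeholder symbols, not the exact functions analysed in the paper. A neutral slow variable acts as a pure integrator (its slow-direction eigenvalue is exactly 1), while an asymptotic one relaxes toward a voltage-dependent value.

```latex
% Illustrative generic fast-slow map (placeholder notation): x is the fast,
% voltage-like variable, y the slow one, 0 < \varepsilon \ll 1.
x_{n+1} = F(x_n, y_n),
\qquad\text{with either}\qquad
y_{n+1} = y_n + \varepsilon\, g(x_n) \quad\text{(neutral slow variable)}
\qquad\text{or}\qquad
y_{n+1} = (1-\varepsilon)\, y_n + \varepsilon\, h(x_n) \quad\text{(asymptotic slow variable).}
```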
The cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a way that generalises out of distribution. While recent years have seen a surge in methodological improvements in this area, they mostly focused on building specialist models. Specialist models are capable of learning to neurally execute either only one algorithm or a collection of algorithms with identical control-flow backbone. Here, instead, we focus on constructing a generalist learner -- a single graph neural network processor...
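A skeletal version of a generalist encode-process-decode layout is sketched below: per-task linear encoders and decoders wrapped around one shared message-passing processor. Task names, feature sizes and the single-layer networks are placeholders; the paper's processor, training scheme and task set are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
HID = 32                                       # shared latent width (illustrative)

# One shared "processor": a single step of message passing over a fully connected
# graph, reused unchanged for every algorithmic task.
Wm = rng.normal(size=(2 * HID, HID)) * 0.1     # message network (single layer here)
Wu = rng.normal(size=(2 * HID, HID)) * 0.1     # update network

def processor(h):
    n = h.shape[0]
    pairs = np.concatenate([np.repeat(h, n, 0), np.tile(h, (n, 1))], axis=1)
    msgs = np.maximum(pairs @ Wm, 0.0).reshape(n, n, HID).max(axis=1)   # max aggregation
    return np.maximum(np.concatenate([h, msgs], axis=1) @ Wu, 0.0)

# Task-specific encoders/decoders: tiny linear maps into and out of the shared space.
tasks = {"sorting": 1, "shortest_paths": 2}    # per-node input feature sizes (made up)
enc = {t: rng.normal(size=(d, HID)) * 0.1 for t, d in tasks.items()}
dec = {t: rng.normal(size=(HID, 1)) * 0.1 for t in tasks}

def run(task, node_inputs, steps=3):
    h = node_inputs @ enc[task]                # encode into the shared latent space
    for _ in range(steps):                     # the SAME processor weights for every task
        h = processor(h)
    return h @ dec[task]                       # decode back to task-specific outputs

out_sort = run("sorting", rng.normal(size=(6, 1)))
out_path = run("shortest_paths", rng.normal(size=(4, 2)))
print(out_sort.shape, out_path.shape)          # (6, 1) (4, 1)
```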
The brain can be described as a very complex dynamical system whose autonomous, intrinsic activity is modulated by a great variety of external inputs. In the last decade, the simulation of this activity using networks of model neurons has led to the development of new approaches to describe the processing of information in the nervous system. In this review paper we focus on winnerless competition, a concept allowing the analysis of emergent behavior associated with collective synchronization, multi-stability and adaptation, properties which seem to be at the basis...
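Winnerless competition is usually introduced with a generalized Lotka-Volterra network with asymmetric inhibition, whose trajectories visit a sequence of saddle states instead of settling on a single winner. The sketch below integrates the standard three-unit example with May-Leonard-type parameters; it is a textbook illustration, not a model taken from the review.

```python
import numpy as np

# Generalized Lotka-Volterra with asymmetric competition: sequential switching
# of the dominant unit rather than convergence to one winner.  The parameter
# choice is the standard illustrative one (one off-diagonal entry < 1 < the other,
# their sum > 2), not taken from the review.
rho = np.array([[1.0, 2.0, 0.5],
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])            # asymmetric competition matrix
sigma = np.ones(3)                           # growth rates

def f(a):
    return a * (sigma - rho @ a)             # da_i/dt = a_i (sigma_i - sum_j rho_ij a_j)

# Simple Euler integration from a small initial perturbation.
dt, steps = 0.01, 60_000
a = np.array([0.2, 0.1, 0.15])
winners = np.empty(steps, dtype=int)
for t in range(steps):
    a = np.clip(a + dt * f(a), 1e-12, None)  # keep activities positive
    winners[t] = np.argmax(a)

# The dominant unit switches cyclically (0 -> 1 -> 2 -> 0 ...), with residence
# times that grow as the trajectory approaches the heteroclinic cycle; the small
# positivity floor plays the role of noise and keeps the switching going.
switch_points = np.flatnonzero(np.diff(winners))
print("sequence of dominant units:", winners[np.r_[0, switch_points + 1]][:12])
```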
Ever since the pioneering work of Hodgkin and Huxley, biological neuron models have consisted of ODEs representing the evolution of the transmembrane voltage and the dynamics of ionic conductances. It is only recently that maps, or difference equations, have begun to receive attention as valid alternatives to conductance-based models. Not only can they be computationally advantageous substitutes for ODE models, but, since they accommodate chaotic dynamics in a natural way, they may reproduce the rich collective behaviors we explore here.
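To show how little code a map neuron needs compared with an ODE model, the sketch below iterates a Rulkov-type map and summarises its inter-spike intervals for two parameter settings; the parameter values and the spike-detection threshold are illustrative choices.

```python
import numpy as np

# Single Rulkov-type map neuron, iterated for two illustrative parameter settings.
# A strongly bimodal inter-spike-interval (ISI) distribution -- short intervals
# inside bursts, long gaps between bursts -- is a crude signature of bursting.
MU = 0.001

def simulate(alpha, sigma, steps=60_000):
    x, y = -1.0, -2.9
    spikes = []
    prev = x
    for n in range(steps):
        x, y = alpha / (1.0 + x * x) + y, y - MU * (x + 1.0) + MU * sigma
        if prev <= 0.0 < x:              # upward crossing of x = 0 marks a spike
            spikes.append(n)
        prev = x
    return np.diff(spikes)

for alpha, sigma in [(4.15, 0.1), (4.15, 0.3)]:
    isi = simulate(alpha, sigma)
    if len(isi):
        print(f"alpha={alpha}, sigma={sigma}: "
              f"ISI median={np.median(isi):.0f}, max={isi.max():.0f}, n={len(isi)}")
    else:
        print(f"alpha={alpha}, sigma={sigma}: no spikes (quiescent)")
```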
Recent experiments have observed a dynamical state characterized by so-called neural avalanches in different systems, such as networks of cultured neurons [1], the developing retina [2] and neocortex in vivo [3]. Neural avalanches are bursts of activity with a power-law size distribution, which suggests that the system has assumed a critical state. To investigate how criticality might develop, we study network growth models that were proposed on the basis of neurobiological evidence [4,5]. In these models, the spiking of a neuron governs the outgrowth of its processes...
Eliciting reasoning capabilities from language models (LMs) is a critical direction on the path towards building intelligent systems. Most recent studies dedicated to reasoning focus on out-of-distribution performance on procedurally-generated synthetic benchmarks, bespoke-built to evaluate specific skills only. This trend makes results hard to transfer across publications, slowing down progress. Three years ago, a similar issue was identified and rectified in the field of neural algorithmic reasoning, with the advent of the CLRS...
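The benchmark idea, procedurally generating textual traces of classical algorithms for LMs to predict, can be illustrated in a few lines; the trace format below is invented for the example and is not the CLRS-Text specification.

```python
import random

def insertion_sort_trace(values):
    """Render the intermediate states of insertion sort as plain text, the kind of
    procedurally generated trace an LM can be asked to predict (format is made up)."""
    a = list(values)
    lines = [f"input: {a}"]
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
        lines.append(f"after inserting index {i}: {a}")
    lines.append(f"output: {a}")
    return "\n".join(lines)

random.seed(0)
sample = [random.randint(0, 99) for _ in range(6)]
print(insertion_sort_trace(sample))
```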
Recent years have seen a significant surge in complex AI systems for competitive programming, capable of performing at admirable levels against human competitors. While steady progress has been made, the highest percentiles still remain out of reach for these methods on standard competition platforms such as Codeforces. Here we instead focus on combinatorial optimisation, where the target is to find as-good-as-possible solutions to otherwise computationally intractable problems, over specific given inputs. We hypothesise...
Recent work on neural algorithmic reasoning has investigated the reasoning capabilities of neural networks, effectively demonstrating that they can learn to execute classical algorithms on unseen data coming from the train distribution. However, the performance of existing neural reasoners significantly degrades on out-of-distribution (OOD) test data, where inputs have larger sizes. In this work, we make an important observation: there are many different inputs for which an algorithm will perform certain intermediate computations identically...
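The observation can be made concrete with a small experiment: the comparisons performed by insertion sort depend only on the relative order of the input values, so many different inputs induce exactly the same intermediate computation. The sketch below checks this; it illustrates the general point rather than the paper's method.

```python
import numpy as np

def comparison_trace(values):
    """Insertion sort, recording the outcome of every comparison it makes.
    The trace depends only on the relative order of the inputs, not their values."""
    a = list(values)
    trace = []
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            greater = a[j] > key
            trace.append(greater)
            if not greater:
                break
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return tuple(trace)

rng = np.random.default_rng(0)
base = rng.normal(size=8)
# Any strictly increasing transformation preserves the relative order, hence the
# whole intermediate computation of the algorithm.
same_order = np.exp(base) * 3.0 + 7.0
different = rng.normal(size=8)

print(comparison_trace(base) == comparison_trace(same_order))   # True
print(comparison_trace(base) == comparison_trace(different))    # False (almost surely)
```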