Simone Carlo Surace

ORCID: 0000-0001-9321-6481
Research Areas
  • Target Tracking and Data Fusion in Sensor Networks
  • Neural dynamics and brain function
  • Water Systems and Optimization
  • Gaussian Processes and Bayesian Inference
  • Neural Networks and Applications
  • Markov Chains and Monte Carlo Methods
  • Hydrological Forecasting Using AI
  • Stochastic processes and financial applications
  • Hydrology and Drought Analysis
  • Functional Brain Connectivity Studies
  • Axon Guidance and Neuronal Signaling
  • Meteorological Phenomena and Simulations
  • Neuroscience and Neural Engineering
  • Receptor Mechanisms and Signaling
  • stochastic dynamics and bifurcation
  • Medical Imaging Techniques and Applications
  • Stochastic Gradient Optimization Techniques
  • Adversarial Robustness in Machine Learning
  • Memory and Neural Mechanisms
  • Model Reduction and Neural Networks
  • Statistical Mechanics and Entropy
  • Neural Networks and Reservoir Computing
  • Blind Source Separation Techniques
  • Visual perception and processing mechanisms
  • Statistical Methods and Bayesian Inference

University of Bern
2014-2023

ETH Zurich
2014-2022

SIB Swiss Institute of Bioinformatics
2014-2019

University of Zurich
2014-2017

Particle filters are a popular and flexible class of numerical algorithms to solve large nonlinear filtering problems. However, standard particle filters with importance weights have been shown to require a sample size that increases exponentially with the dimension $D$ of the state space in order to achieve a certain performance, which precludes their use in very high-dimensional problems. Here, we focus on the dynamic aspect of this "curse of dimensionality" (COD) in continuous-time filtering, which is caused by the degeneracy of importance weights over time. We show that it occurs...

10.1137/17m1125340 article EN cc-by SIAM Review 2019-01-01
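
The weight degeneracy mentioned in the abstract above can be made concrete with a small simulation. The sketch below is not the paper's algorithm; it is a plain bootstrap particle filter on a toy 1-D linear-Gaussian model (all parameters are my own illustrative choices), using the effective sample size as the usual diagnostic of degeneracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D state-space model (illustrative parameters, not from the paper):
#   x_t = a * x_{t-1} + process noise,   y_t = x_t + observation noise
a, q, r = 0.9, 0.5, 0.5
T, N = 50, 500          # time steps, number of particles

# Simulate a ground-truth trajectory and observations
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
    y[t] = x[t] + np.sqrt(r) * rng.standard_normal()

# Bootstrap particle filter with importance weights
particles = rng.standard_normal(N)
weights = np.full(N, 1.0 / N)
for t in range(1, T):
    # propagate particles through the prior dynamics
    particles = a * particles + np.sqrt(q) * rng.standard_normal(N)
    # reweight by the observation likelihood
    weights *= np.exp(-0.5 * (y[t] - particles) ** 2 / r)
    weights /= weights.sum()
    # effective sample size: a small ESS signals weight degeneracy,
    # the mechanism behind the curse of dimensionality discussed above
    ess = 1.0 / np.sum(weights ** 2)
    if ess < N / 2:  # standard remedy: resample when ESS drops
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    estimate = np.sum(weights * particles)

print("final estimate vs truth:", estimate, x[-1])
```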

The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This task can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a statistical interpretation. However, it is presently unclear how such a filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility...

10.1038/s41598-017-06519-y article EN cc-by Scientific Reports 2017-08-14

Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, this fails to account for a time-varying environment during the learning process, and the resulting point estimate in parameter space does not capture uncertainty. Here, we frame learning as filtering, i.e., a principled method for including time and parameter uncertainty. We derive a filtering-based learning rule for a spiking neuronal network, the Synaptic Filter, and show its biological relevance. For biological relevance, we show that filtering improves...

10.1371/journal.pcbi.1009721 article EN cc-by PLoS Computational Biology 2022-02-23
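
The core idea above, treating learning as filtering of a drifting parameter rather than optimisation of a fixed one, can be illustrated with a toy example. This is not the Synaptic Filter or a spiking network; it is a minimal Kalman-filter treatment of a single drifting regression weight, with all model choices being my own placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: a scalar weight w_t drifts as a random walk and we observe
# y_t = w_t * x_t + noise. Treating learning as filtering means tracking the
# posterior p(w_t | y_1..y_t), i.e. a mean AND an uncertainty, over time.
T = 200
drift_var, obs_var = 0.01, 0.25
w_true = np.cumsum(np.sqrt(drift_var) * rng.standard_normal(T))  # drifting weight
x_in = rng.standard_normal(T)                                    # input signal
y = w_true * x_in + np.sqrt(obs_var) * rng.standard_normal(T)    # observations

# Kalman-filter update for the weight: posterior mean m and variance v
m, v = 0.0, 1.0
for t in range(T):
    v += drift_var                                    # predict: uncertainty grows with drift
    k = v * x_in[t] / (x_in[t] ** 2 * v + obs_var)    # Kalman gain
    m += k * (y[t] - m * x_in[t])                     # update mean with prediction error
    v *= (1 - k * x_in[t])                            # shrink uncertainty after observing

print("tracked weight:", m, "+/-", np.sqrt(v), " true weight:", w_true[-1])
```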

We revisit the problem of estimating the parameters of a partially observed diffusion process, consisting of a hidden state process and an observed process, with a continuous time parameter. The estimation is to be done online, i.e., the parameter estimate should be updated recursively based on the observation filtration. We provide a theoretical analysis of a stochastic gradient ascent algorithm on the incomplete-data log-likelihood. Convergence is proved under suitable conditions regarding the ergodicity of the state, the filter, and the tangent filter. Additionally, our method is shown...

10.1109/tac.2018.2880404 article EN IEEE Transactions on Automatic Control 2018-11-26
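
To make the recursive scheme above concrete, here is a discrete-time sketch (my own simplification of the continuous-time setting): a Kalman filter for a scalar linear-Gaussian model is run alongside a "tangent filter" (the derivatives of the filter statistics with respect to the parameter), and the parameter is updated by stochastic gradient ascent on each incremental log-likelihood. All model and step-size choices are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hidden x_t = a * x_{t-1} + noise, observed y_t = x_t + noise.
# Online estimation of a via gradient ascent on log p(y_t | y_{1:t-1}; a).
a_true, q, r = 0.8, 0.2, 0.5
T = 20000
x_state = 0.0
a_hat = 0.3                      # initial parameter guess
m, P = 0.0, 1.0                  # filter mean / variance
dm, dP = 0.0, 0.0                # tangent filter: d(m)/da, d(P)/da

for t in range(1, T + 1):
    # simulate the true system and a new observation
    x_state = a_true * x_state + np.sqrt(q) * rng.standard_normal()
    y = x_state + np.sqrt(r) * rng.standard_normal()

    # prediction step and its derivative with respect to a_hat
    m_pred = a_hat * m
    dm_pred = m + a_hat * dm
    P_pred = a_hat ** 2 * P + q
    dP_pred = 2 * a_hat * P + a_hat ** 2 * dP

    # innovation statistics
    S, dS = P_pred + r, dP_pred
    innov = y - m_pred

    # gradient of the incremental log-likelihood log N(y; m_pred, S)
    grad = -0.5 * dS / S + innov * dm_pred / S + 0.5 * innov ** 2 * dS / S ** 2

    # Kalman update and its derivative
    K = P_pred / S
    dK = dP_pred * r / S ** 2
    m = m_pred + K * innov
    dm = (1 - K) * dm_pred + dK * innov
    P = (1 - K) * P_pred
    dP = (1 - K) * dP_pred - dK * P_pred

    # recursive parameter update with a decreasing step size
    a_hat += 0.05 / np.sqrt(t) * grad

print("estimated a:", a_hat, " true a:", a_true)
```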

Nonlinear filtering is used in the online estimation of a dynamic hidden variable from incoming data and has vast applications in different fields, ranging from engineering, machine learning, and economic science to the natural sciences. We start our review of the theory of nonlinear filtering from the simplest 'filtering' task we can think of, namely static Bayesian inference. From there we continue our journey through discrete-time models, which are usually encountered in machine learning, and generalize to continuous-time filtering theory. The idea of changing the probability...

10.1016/j.jmp.2019.102307 article EN cc-by-nc-nd Journal of Mathematical Psychology 2019-12-27
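
The review's starting point, static Bayesian inference as the simplest "filtering" task, can be written in a few lines. The sketch below is a generic conjugate Gaussian update for a fixed hidden quantity observed under Gaussian noise; the numbers are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

hidden = 1.5                     # static hidden variable to infer
obs_var = 0.5
prior_mean, prior_var = 0.0, 4.0

mean, var = prior_mean, prior_var
for _ in range(20):
    y = hidden + np.sqrt(obs_var) * rng.standard_normal()
    # conjugate Gaussian update: precision-weighted combination of the
    # current belief and the new observation
    precision = 1.0 / var + 1.0 / obs_var
    mean = (mean / var + y / obs_var) / precision
    var = 1.0 / precision

print(f"posterior: {mean:.3f} +/- {np.sqrt(var):.3f}, truth: {hidden}")
```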

Being able to model uncertainty is a vital property for any intelligent agent. In an environment in which the domain of input stimuli is fully controlled, neglecting uncertainty may work, but this usually does not hold true in a real-world scenario. This highlights the necessity of learning algorithms that robustly detect noisy and out-of-distribution examples. Here we propose a novel approach to uncertainty estimation based on adversarially trained hypernetworks. We define a weight posterior that uniformly allows realizations of a neural network...

10.5167/uzh-168578 article EN 2018-12-07
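
As a purely structural sketch of the hypernetwork idea mentioned above: a generator maps noise samples to weight realizations of a small target network, and the spread of predictions across realizations serves as an uncertainty signal. The sizes, the random (untrained) generator, and the function names below are my own placeholders; the adversarial training loop described in the abstract is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(4)

z_dim, in_dim, hid_dim, out_dim = 8, 2, 16, 1
n_target_weights = in_dim * hid_dim + hid_dim * out_dim

# hypernetwork: here just a single linear map from noise to target-network weights
G = 0.1 * rng.standard_normal((z_dim, n_target_weights))

def target_forward(weights, x):
    """Run the small target network with one sampled weight realization."""
    W1 = weights[: in_dim * hid_dim].reshape(in_dim, hid_dim)
    W2 = weights[in_dim * hid_dim:].reshape(hid_dim, out_dim)
    return np.tanh(x @ W1) @ W2

def predict_with_uncertainty(x, n_samples=100):
    """Sample weight realizations from the hypernetwork and aggregate predictions."""
    preds = []
    for _ in range(n_samples):
        z = rng.standard_normal(z_dim)
        preds.append(target_forward(z @ G, x))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)   # mean and uncertainty

x_in = np.array([[0.5, -1.0]])
mean, std = predict_with_uncertainty(x_in)
print("prediction:", mean.ravel(), "uncertainty:", std.ravel())
```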

This is a PLOS Computational Biology Education paper. The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Because a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle potentially implemented by the brain. In practice, researchers consider...

10.1371/journal.pcbi.1007640 article EN cc-by PLoS Computational Biology 2020-04-09
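
A point that paper turns on is that "steepest descent" is only defined relative to a metric on parameter space. The toy example below (cost and metric are my own choices) runs gradient descent on the same quadratic cost with the Euclidean metric and with a different diagonal metric; both descend the cost, but along different trajectories.

```python
import numpy as np

A = np.array([[3.0, 0.0], [0.0, 0.5]])     # cost C(w) = 0.5 * w^T A w

def grad(w):
    return A @ w

M = np.diag([5.0, 0.2])                    # an alternative metric tensor
M_inv = np.linalg.inv(M)

w_euclid = np.array([2.0, 2.0])
w_metric = np.array([2.0, 2.0])
lr = 0.1
for _ in range(50):
    w_euclid = w_euclid - lr * grad(w_euclid)          # Euclidean steepest descent
    w_metric = w_metric - lr * M_inv @ grad(w_metric)  # steepest descent under metric M

print("Euclidean descent endpoint:", w_euclid)
print("Metric-M descent endpoint: ", w_metric)
```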

Bayesian Active Learning (BAL) is an efficient framework for learning the parameters of a model, in which input stimuli are selected to maximize the mutual information between the observations and the unknown parameters. However, the applicability of BAL to experiments is limited, as it requires performing high-dimensional integrations and optimizations in real time. Current methods are either too time-consuming or only applicable to specific models. Here, we propose the Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used...

10.1371/journal.pcbi.1011342 article EN cc-by PLoS Computational Biology 2023-08-21
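
The BAL criterion itself can be evaluated with posterior samples, which is the general flavour of a sampling-based approach. The sketch below is a toy stand-in (a sigmoid with an unknown threshold, not the paper's synapse model): for each candidate stimulus it estimates the mutual information between the next binary response and the unknown parameter from posterior samples and picks the maximizer.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def bernoulli_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# samples from the current posterior over the unknown threshold
theta_samples = rng.normal(loc=0.5, scale=1.0, size=2000)
candidate_stimuli = np.linspace(-3, 3, 61)

def expected_information_gain(x):
    p_given_theta = sigmoid(x - theta_samples)          # p(y=1 | x, theta_i)
    marginal = p_given_theta.mean()                     # p(y=1 | x)
    # MI = H[marginal response] - E_theta[ H[response | theta] ]
    return bernoulli_entropy(marginal) - bernoulli_entropy(p_given_theta).mean()

gains = np.array([expected_information_gain(x) for x in candidate_stimuli])
best = candidate_stimuli[np.argmax(gains)]
print("most informative stimulus:", best)
```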

The filtering of a Markov diffusion process on a manifold from counting observations leads to 'large' changes in the conditional distribution upon an observed event, corresponding to a multiplication of the density by the intensity function of the observation process. If that density is represented by unweighted samples or particles, they need to be jointly transformed such that they sample from the modified distribution. In previous work, this transformation has been approximated by a translation of all particles by a common vector. However, this operation...

10.1109/lcsys.2019.2951093 article EN cc-by IEEE Control Systems Letters 2019-11-05
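
The multiplicative update at an observed event is easiest to see in the weighted-particle setting, shown below with a toy Gaussian-shaped intensity of my own choosing. The paper's actual contribution, a joint transformation of unweighted particles, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

def intensity(x):
    """Toy event intensity, peaked around x = 1."""
    return np.exp(-0.5 * (x - 1.0) ** 2)

# particles approximating the conditional distribution just before the event
particles = rng.standard_normal(1000)
weights = np.full(particles.size, 1.0 / particles.size)

print("mean before event:", np.sum(weights * particles))

# at the event time: multiply by the intensity and renormalize
weights = weights * intensity(particles)
weights /= weights.sum()

print("mean after event: ", np.sum(weights * particles))
```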

We revisit the problem of estimating the parameters of a partially observed diffusion process, consisting of a hidden state process and an observed process, with a continuous time parameter. The estimation is to be done online, i.e. the parameter estimate should be updated recursively based on the observation filtration. We provide a theoretical analysis of a stochastic gradient ascent algorithm on the incomplete-data log-likelihood. Convergence is proved under suitable conditions regarding the ergodicity of the state, the filter, and the tangent filter. Additionally, our...

10.5167/uzh-149411 preprint EN arXiv (Cornell University) 2016-11-01

Particle filters are a popular and flexible class of numerical algorithms to solve large nonlinear filtering problems. However, standard particle filters with importance weights have been shown to require a sample size that increases exponentially with the dimension D of the state space in order to achieve a certain performance, which precludes their use in very high-dimensional problems. Here, we focus on the dynamic aspect of this curse of dimensionality (COD) in continuous time filtering, which is caused by the degeneracy of importance weights over time. We show that it occurs...

10.48550/arxiv.1703.07879 preprint EN other-oa arXiv (Cornell University) 2017-01-01

Feedback particle filters (FPFs) are Monte-Carlo approximations of the solution of the filtering problem in continuous time. The samples or particles evolve according to a feedback control law in order to track the posterior distribution. However, it is known that this requirement by itself does not lead to a unique algorithm. Given one filter, another one can be constructed by applying a time-dependent transformation of the particles that keeps the posterior distribution invariant. Here, we characterize this gauge freedom within the class of FPFs for the linear-Gaussian...

10.1109/cdc40024.2019.9029897 article EN 2019-12-01
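
For readers unfamiliar with the FPF structure referenced above, here is a minimal linear-Gaussian feedback particle filter discretized with Euler-Maruyama: each particle is steered by a feedback control driven by the innovation, with a gain computed from the particle ensemble. The model, step sizes, and constant-gain form are my own illustrative choices; the gauge freedom analysed in the paper is not explored here.

```python
import numpy as np

rng = np.random.default_rng(7)

# dX = a X dt + sigma dB,   dY = h X dt + sqrt(R) dW
a, sigma, h, R = -0.5, 0.5, 1.0, 0.1
dt, T, N = 1e-3, 10.0, 500
steps = int(T / dt)

x_true = 1.0
particles = rng.standard_normal(N)

for _ in range(steps):
    # simulate the signal and the observation increment
    x_true += a * x_true * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    dY = h * x_true * dt + np.sqrt(R * dt) * rng.standard_normal()

    # ensemble statistics give the (Kalman-like) gain K = h * P / R
    x_bar = particles.mean()
    P = particles.var()
    K = h * P / R

    # feedback particle filter update for every particle
    innovation = dY - 0.5 * h * (particles + x_bar) * dt
    particles += a * particles * dt \
                 + sigma * np.sqrt(dt) * rng.standard_normal(N) \
                 + K * innovation

print("true state:", x_true, " FPF estimate:", particles.mean())
```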

Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model as well as simplified models of the integrate-and-fire class relate the input current to the membrane potential of the neuron. Those types of models have been extensively fitted to in vitro data, where the input current is controlled. They are however of little use when it comes to characterizing intracellular in vivo recordings, since the input current is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically,...

10.1371/journal.pone.0142435 article EN cc-by PLoS ONE 2015-11-16
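
A generic "doubly stochastic" construction in the spirit of the abstract above can be simulated in a few lines: the subthreshold potential follows an Ornstein-Uhlenbeck process whose mean input is itself a slower stochastic process, so the statistics of the trace drift over time. All parameters below are illustrative placeholders, not the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

dt, T = 1e-3, 5.0
steps = int(T / dt)

tau_v, sigma_v = 0.02, 3.0        # fast membrane dynamics (s, mV)
tau_m, sigma_m = 0.5, 5.0         # slow modulation of the mean input
v_rest = -65.0                    # resting potential (mV)

v = v_rest
mu = 0.0                          # slowly varying mean input
trace = np.empty(steps)
for i in range(steps):
    # slow process modulating the mean input (the "doubly stochastic" layer)
    mu += -mu / tau_m * dt + sigma_m * np.sqrt(dt / tau_m) * rng.standard_normal()
    # fast OU dynamics of the subthreshold potential around v_rest + mu
    v += (v_rest + mu - v) / tau_v * dt + sigma_v * np.sqrt(dt / tau_v) * rng.standard_normal()
    trace[i] = v

print(f"mean potential: {trace.mean():.1f} mV, std: {trace.std():.1f} mV")
```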

Figure 1. Left: A sample trajectory of the real hidden state and its filtered estimate, showing the ability of the neural filter to infer the hidden variable. Right: For a nonlinear dynamics, the neuronal filter we propose achieves an estimation error which is comparable to that of a particle filter or extended Kalman filter (EKF). The worst performance corresponds to our filter with a suboptimal parameter choice.

10.1186/1471-2202-16-s1-p196 article EN cc-by BMC Neuroscience 2015-12-01

Particle filters are a popular and flexible class of numerical algorithms to solve large nonlinear filtering problems. However, standard particle filters with importance weights have been shown to require a sample size that increases exponentially with the dimension D of the state space in order to achieve a certain performance, which precludes their use in very high-dimensional problems. Here, we focus on the dynamic aspect of this curse of dimensionality (COD) in continuous time filtering, which is caused by the degeneracy of importance weights over time. We show that it occurs...

10.5167/uzh-149410 preprint EN arXiv (Cornell University) 2017-03-22

Particle filters (PFs), which are successful methods for approximating the solution of the filtering problem, can be divided into two types: weighted and unweighted PFs. It is well known that weighted PFs suffer from weight degeneracy and the curse of dimensionality. To sidestep these issues, unweighted PFs have been gaining attention, though they have their own challenges. The existing literature on the two types is based on distinct approaches. In order to establish a connection, we put forward a framework that unifies weighted and unweighted PFs in the continuous-time filtering problem. We...

10.1137/20m1382404 article EN cc-by SIAM Journal on Control and Optimization 2022-03-01

Cortical neurons are constantly active. Even in the absence of an explicit stimulus, cortical neurons are spontaneously active and display large fluctuations of their membrane potentials. The increasing amount of intracellular recordings of spontaneous activity as well as the number of theories which critically rely on a characterization of spontaneous activity calls for a proper quantification of spontaneous dynamics. Here we propose a statistical model that is very flexible and remains tractable. More specifically, we propose a doubly stochastic process where the subthreshold potential...

10.5167/uzh-107851 article EN 2014-01-01

We revisit the problem of estimating the parameters of a partially observed diffusion process, consisting of a hidden state process and an observed process, with a continuous time parameter. The estimation is to be done online, i.e. the parameter estimate should be updated recursively based on the observation filtration. We provide a theoretical analysis of a stochastic gradient ascent algorithm on the incomplete-data log-likelihood. Convergence is proved under suitable conditions regarding the ergodicity of the state, the filter, and the tangent filter. Additionally, our...

10.48550/arxiv.1611.00170 preprint EN other-oa arXiv (Cornell University) 2016-01-01