Friedrich T. Sommer

ORCID: 0000-0002-6738-9263
Research Areas
  • Neural dynamics and brain function
  • Advanced Memory and Neural Computing
  • Neural Networks and Applications
  • Neural Networks and Reservoir Computing
  • Functional Brain Connectivity Studies
  • Ferroelectric and Negative Capacitance Devices
  • Physics and Engineering Research Articles
  • Neuroscience and Neuropharmacology Research
  • Visual perception and processing mechanisms
  • Cell Image Analysis Techniques
  • History and Theory of Mathematics
  • Corporate Governance and Management
  • Photoreceptor and optogenetics research
  • Mathematics and Applications
  • Memory and Neural Mechanisms
  • Algebraic and Geometric Analysis
  • Complex Systems and Time Series Analysis
  • Blind Source Separation Techniques
  • Sparse and Compressive Sensing Techniques
  • Cellular Automata and Applications
  • Generative Adversarial Networks and Image Synthesis
  • CCD and CMOS Imaging Sensors
  • EEG and Brain-Computer Interfaces
  • Neuroscience and Neural Engineering
  • stochastic dynamics and bifurcation

University of California, Berkeley
2016-2025

Center for Theoretical Biological Physics
2008-2025

Intel (United States)
2020-2024

Los Angeles Mission College
2021-2023

Neuroscience Institute
2023

University of California, San Francisco
2020

Berkeley College
2018

University of Münster
1955-2012

University of California System
2007

Universität Ulm
1996-2003

Chaos, or exponential sensitivity to small perturbations, appears everywhere in nature. Moreover, chaos is predicted to play diverse functional roles in living systems. A method for detecting chaos from empirical measurements should therefore be a key component of the biologist's toolkit. But classic chaos-detection tools are highly sensitive to measurement noise and break down in common edge cases, making it difficult to detect chaos in domains, like biology, where measurements are often noisy. However, newer tools promise to overcome these...

10.1038/s42003-019-0715-9 article EN cc-by Communications Biology 2020-01-03
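
The abstract above discusses chaos-detection tools without naming one; the classic 0-1 test of Gottwald and Melbourne is representative of that family. Below is an illustrative pure-Python sketch of the test (correlation variant, with the usual mean-oscillation correction), not the paper's specific method:

```python
import math
import random

def zero_one_test(x, n_c=10, seed=0):
    """0-1 test for chaos (Gottwald-Melbourne, correlation variant).
    Returns K: near 1 for a chaotic series, near 0 for regular dynamics."""
    rng = random.Random(seed)
    N = len(x)
    ncut = N // 10
    mean_x = sum(x) / N
    Ks = []
    for _ in range(n_c):
        c = rng.uniform(math.pi / 5, 4 * math.pi / 5)  # avoid resonant c
        # drive a 2D walk (p, q) with the data
        p, q = [0.0], [0.0]
        for j, xj in enumerate(x, start=1):
            p.append(p[-1] + xj * math.cos(j * c))
            q.append(q[-1] + xj * math.sin(j * c))
        # mean-square displacement, subtracting the bounded oscillation
        # caused by a nonzero signal mean (the "modified" test)
        n_vals = list(range(1, ncut + 1))
        D = []
        for n in n_vals:
            msd = sum((p[j + n] - p[j]) ** 2 + (q[j + n] - q[j]) ** 2
                      for j in range(N - ncut)) / (N - ncut)
            osc = mean_x ** 2 * (1 - math.cos(n * c)) / (1 - math.cos(c))
            D.append(msd - osc)
        Ks.append(_corr(n_vals, D))  # linear growth of D(n) => K near 1
    Ks.sort()
    return Ks[len(Ks) // 2]  # median over random c values

def _corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(va * vb) if va * vb > 0 else 0.0
```

On a chaotic logistic-map series (r = 4) K approaches 1, while a periodic series (r = 3.2) yields K near 0.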

The biologically inspired spiking neurons used in neuromorphic computing are nonlinear filters with dynamic state variables, very different from the stateless neuron models used in deep learning. The next version of Intel's neuromorphic research processor, Loihi 2, supports a wide range of stateful neuron models with fully programmable dynamics. Here we showcase advanced neuron models that can be used to efficiently process streaming data in simulation experiments on emulated Loihi 2 hardware. In one example, Resonate-and-Fire (RF) neurons compute the Short Time Fourier...

10.1109/sips52927.2021.00053 article EN 2021-10-01
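
To illustrate the Resonate-and-Fire idea: an RF neuron can be modeled as a leaky complex-valued oscillator whose state magnitude tracks signal energy near its resonant frequency, i.e., one bin of a running short-time Fourier transform. A minimal discrete-time sketch with illustrative parameters (not Loihi 2 code):

```python
import cmath
import math

def rf_neuron(signal, freq_hz, fs, tau=0.05, threshold=None):
    """Resonate-and-fire neuron as a damped complex oscillator.

    Update: z[t] = a * z[t-1] + x[t], with
    a = exp((-1/tau + 2j*pi*freq_hz) / fs). The magnitude |z| tracks
    signal energy near freq_hz, like one bin of a running Fourier
    transform; a spike is emitted when |z| crosses threshold.
    """
    a = cmath.exp(complex(-1.0 / tau, 2.0 * math.pi * freq_hz) / fs)
    z = 0j
    mags, spikes = [], []
    for x in signal:
        z = a * z + x
        mags.append(abs(z))
        spikes.append(threshold is not None and abs(z) >= threshold)
    return mags, spikes
```

Feeding a 10 Hz sine through neurons tuned to 10 Hz and 40 Hz, the on-resonance neuron builds up a much larger state magnitude.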

Significance What changes in the brain when we lose consciousness? One possibility is that loss of consciousness corresponds to a transition of the brain’s electrical activity away from edge-of-chaos criticality, or the knife’s edge between stability and chaos. Recent mathematical developments have produced tools for testing this hypothesis, which we apply to cortical recordings from diverse brain states. We show that the cortex is indeed poised near the boundary of chaos during conscious states and that the transition to unconsciousness disrupts...

10.1073/pnas.2024455119 article EN cc-by-nc-nd Proceedings of the National Academy of Sciences 2022-02-10

Although neuronal spikes can be readily detected from extracellular recordings, synaptic and subthreshold activity remains undifferentiated within the local field potential (LFP). In the hippocampus, neurons discharge selectively when the rat is at certain locations, while LFPs at single anatomical sites exhibit no such place-tuning. Nonetheless, because the representation of position is sparse and distributed, we hypothesized that spatial information can be recovered from multiple-site LFP recordings. Using high-density...

10.1126/science.1250444 article EN Science 2014-05-08

To accommodate structured approaches of neural computation, we propose a class of recurrent networks for indexing and storing sequences of symbols or analog data vectors. These networks, with randomized input weights and orthogonal recurrent weights, implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits retrieval accuracy and information capacity. A novel theory to optimize memory performance...

10.1162/neco_a_01084 article EN Neural Computation 2018-04-13
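
The indexing scheme described above can be illustrated with a simplified VSA-style sketch: a cyclic shift (a special orthogonal recurrent transform) tags each item with its position, items are superposed into a single trace, and retrieval is lossy nearest-neighbor decoding against the codebook, with crosstalk from the other stored items acting as noise. This is a toy version under those assumptions, not the paper's network:

```python
import random

def rand_vec(d, rng):
    # random bipolar hypervector (codebook entry)
    return [rng.choice((-1, 1)) for _ in range(d)]

def roll(v, k):
    # cyclic shift: a special orthogonal (permutation) transform
    k %= len(v)
    return v[-k:] + v[:-k]

def store(seq, d=2048, seed=1):
    """Superpose position-tagged items into one d-dimensional trace."""
    rng = random.Random(seed)
    codebook = {s: rand_vec(d, rng) for s in set(seq)}
    trace = [0] * d
    for t, s in enumerate(seq):
        tagged = roll(codebook[s], t)  # shift by t encodes position t
        trace = [m + x for m, x in zip(trace, tagged)]
    return trace, codebook

def recall(trace, codebook, t):
    """Undo the shift for position t; crosstalk from the other items
    is noise, so decode by the nearest codebook vector (dot product)."""
    probe = roll(trace, -t)
    return max(codebook,
               key=lambda s: sum(p * c for p, c in zip(probe, codebook[s])))
```

With 2048 dimensions and a short sequence, the crosstalk noise is far below the signal, so every position decodes correctly.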

A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., hardware implementations of networks of interacting binary spin variables. Most Ising machines leverage second-order interactions, although important classes of optimization problems, such as satisfiability, map more seamlessly to networks with higher-order interactions. Here, we demonstrate that higher-order Ising machines can solve satisfiability problems more resource-efficiently in terms of the number of spin variables and their connections when compared with traditional second-order Ising machines. Further, our...

10.1038/s41467-023-41214-9 article EN cc-by Nature Communications 2023-09-27
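
The higher-order mapping mentioned above can be made concrete: in spin variables s ∈ {−1, +1}, a k-literal SAT clause becomes a product of k factors that equals 0 when the clause is satisfied and 1 when it is violated, i.e., a k-th-order interaction term. A small illustrative sketch (the literal encoding and example formula are my own convention, not the paper's):

```python
from itertools import product

def clause_energy(spins, clause):
    """Energy of one SAT clause as a higher-order spin interaction.

    clause: list of (index, sign) literals; sign=+1 for x_i, -1 for NOT x_i.
    spins: mapping index -> +/-1, where +1 means the variable is True.
    Each factor (1 - sign*s)/2 is 0 when its literal is satisfied, so the
    product is 1 only if every literal fails (clause violated). Expanding
    the product of a 3-literal clause yields terms up to third order.
    """
    e = 1.0
    for i, sign in clause:
        e *= (1 - sign * spins[i]) / 2
    return e

def formula_energy(spins, clauses):
    # total energy = number of violated clauses; ground states (energy 0)
    # are exactly the satisfying assignments
    return sum(clause_energy(spins, c) for c in clauses)

# hypothetical formula: (x0 v x1 v x2) & (~x0 v ~x1) & (~x2 v x0)
clauses = [[(0, 1), (1, 1), (2, 1)], [(0, -1), (1, -1)], [(2, -1), (0, 1)]]
best = min(product((-1, 1), repeat=3),
           key=lambda s: formula_energy(dict(enumerate(s)), clauses))
```

A brute-force scan over all 8 assignments confirms that the minimum-energy state satisfies every clause.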

Discovering the structure underlying observed data is a recurring problem in machine learning with important applications in neuroscience. It is also a primary function of the brain. When data can be actively collected in the context of a closed action-perception loop, behavior becomes a critical determinant of learning efficiency. Psychologists studying exploration and curiosity in humans and animals have long argued that learning itself is a motivator of behavior. However, the theoretical basis of learning-driven behavior is not well understood. Previous computational...

10.3389/fncir.2013.00037 article EN cc-by Frontiers in Neural Circuits 2013-01-01

Significance This work makes 2 contributions. First, we present a neural network model of associative memory that stores and retrieves sparse patterns of complex variables. It can store analog information as fixed-point attractors in the complex domain; it is governed by an energy function and has increased memory capacity compared to early models. Second, we translate the attractor networks into networks of spiking neurons, where the timing of a spike indicates the phase of a complex number. We show that the fixed points correspond to stable periodic spiking patterns. It...

10.1073/pnas.1902653116 article EN cc-by-nc-nd Proceedings of the National Academy of Sciences 2019-08-20

An outstanding problem in neuroscience is to understand how information is integrated across the many modules of the brain. While classic information-theoretic measures have transformed our understanding of feedforward processing in the brain's sensory periphery, comparable measures for information flow in the massively recurrent networks of the rest of the brain have been lacking. To address this, recent work in information theory has produced a sound measure of network-wide "integrated information", which can be estimated from time-series data. But, a computational hurdle...

10.1371/journal.pcbi.1006807 article EN cc-by PLoS Computational Biology 2019-02-07

ABSTRACT Neurodata Without Borders: Neurophysiology (NWB:N) is a data standard for neurophysiology, providing neuroscientists with a common standard to share, archive, use, and build analysis tools for neurophysiology data. With NWB:N version 2.0 (NWB:N 2.0) we made significant advances towards creating a usable standard, software ecosystem, and vibrant community for standardizing neurophysiology data. In this manuscript we focus in particular on the data-standard schema and present an accessible data standard for neurophysiology.

10.1101/523035 preprint EN bioRxiv (Cold Spring Harbor Laboratory) 2019-01-17

Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatoric...

10.1109/tnnls.2021.3105949 article EN IEEE Transactions on Neural Networks and Learning Systems 2021-09-03
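
The dimensionality-preserving binding operation can be illustrated with bipolar hypervectors: the Hadamard (elementwise) product serves as a self-inverse binding, and role-filler pairs are superposed into a single record of unchanged dimension. A minimal sketch (the roles and fillers are invented examples):

```python
import random

D = 4096
rng = random.Random(0)

def rv():
    # random bipolar hypervector; in high dimension, independently
    # drawn vectors are nearly orthogonal
    return [rng.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    # Hadamard product: dimensionality-preserving and self-inverse
    return [x * y for x, y in zip(a, b)]

def sim(a, b):
    # normalized dot-product similarity
    return sum(x * y for x, y in zip(a, b)) / D

# hypothetical roles and fillers
role = {'color': rv(), 'shape': rv()}
filler = {'red': rv(), 'blue': rv(), 'square': rv(), 'circle': rv()}

# record "red square" = color*red + shape*square: still one D-dim vector
record = [c + s for c, s in zip(bind(role['color'], filler['red']),
                                bind(role['shape'], filler['square']))]

def query(record, r):
    # unbind the role, then decode the nearest filler; the other bound
    # pair contributes only near-orthogonal noise
    probe = bind(record, role[r])
    return max(filler, key=lambda f: sim(probe, filler[f]))
```

Unbinding with the 'color' role recovers 'red', and with 'shape' recovers 'square', even though both pairs share the same vector.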

Abstract In this perspective article, we consider the critical issue of data and other research object standardisation and, specifically, how international collaboration and organizations such as the International Neuroinformatics Coordinating Facility (INCF) can encourage that emerging neuroscience data be Findable, Accessible, Interoperable, and Reusable (FAIR). As neuroscientists engaged in the sharing and integration of multi-modal and multiscale data, we see the current insufficiency of standards as a major impediment...

10.1007/s12021-021-09557-0 article EN cc-by Neuroinformatics 2022-01-21

Vector space models for symbolic processing that encode symbols by random vectors have been proposed in the cognitive science and connectionist communities under the names Vector Symbolic Architecture (VSA) and, synonymously, Hyperdimensional (HD) computing [22, 31, 46]. In this paper, we generalize VSAs to function spaces by mapping continuous-valued data into a vector space such that the inner product between the representations of any two data points approximately represents a similarity kernel. By analogy to VSA, we call this new...

10.1145/3517343.3522597 article EN 2022-03-28