Christian Pehle

ORCID: 0000-0002-5447-0716
Research Areas
  • Advanced Memory and Neural Computing
  • Neural Networks and Reservoir Computing
  • Ferroelectric and Negative Capacitance Devices
  • Neural dynamics and brain function
  • Neural Networks and Applications
  • Quantum Computing Algorithms and Architecture
  • EEG and Brain-Computer Interfaces
  • CCD and CMOS Imaging Sensors
  • Black Holes and Theoretical Physics
  • Nonlinear Waves and Solitons
  • Quantum Information and Cryptography
  • Functional Brain Connectivity Studies
  • Neuroscience and Neural Engineering
  • Quantum Chromodynamics and Particle Interactions
  • Neurobiology and Insect Physiology Research
  • Context-Aware Activity Recognition Systems
  • IoT and Edge/Fog Computing
  • Brain Tumor Detection and Classification
  • Algebraic Geometry and Number Theory

Heidelberg University
2018-2025

Cold Spring Harbor Laboratory
2023-2024

Kirchhoff Institute for Physics (Germany)
2018-2023

Institute for Physics
2019-2020

Since the beginning of information processing by electronic components, the nervous system has served as a metaphor for the organization of computational primitives. Brain-inspired computing today encompasses a class of approaches ranging from the use of novel nano-devices for computation to research into large-scale neuromorphic architectures, such as TrueNorth, SpiNNaker, BrainScaleS, Tianjic, and Loihi. While implementation details differ, spiking neural networks (sometimes referred to as third-generation neural networks) are...

10.3389/fnins.2022.795876 article EN cc-by Frontiers in Neuroscience 2022-02-24

Significance Neuromorphic systems aim to accomplish efficient computation in electronics by mirroring neurobiological principles. Taking advantage of neuromorphic technologies requires effective learning algorithms capable of instantiating high-performing neural networks, while also dealing with inevitable manufacturing variations in individual components, such as memristors or analog neurons. We present a learning framework resulting in bioinspired spiking networks with high performance, low inference latency,...

10.1073/pnas.2109194119 article EN cc-by Proceedings of the National Academy of Sciences 2022-01-14
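In-the-loop training of spiking networks of the kind described above typically backpropagates errors through the hard spike threshold by replacing its non-existent derivative with a smooth surrogate. The sketch below shows one common surrogate shape (SuperSpike-style); the threshold and steepness values are illustrative assumptions, not the paper's settings.

```python
def surrogate_spike_grad(v, v_th=1.0, beta=10.0):
    """Smooth stand-in for the derivative of the hard threshold
    nonlinearity at membrane potential v (SuperSpike-style).

    v_th and beta are illustrative values, not taken from the paper.
    """
    return 1.0 / (1.0 + beta * abs(v - v_th)) ** 2
```

The surrogate peaks at the threshold and decays away from it, so parameter updates concentrate on neurons whose membrane potential is close to firing.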

Neuromorphic devices represent an attempt to mimic aspects of the brain's architecture and dynamics with the aim of replicating its hallmark functional capabilities in terms of computational power, robust learning, and energy efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic system to implement a proof-of-concept demonstration of reward-modulated spike-timing-dependent plasticity in a spiking network that learns to play a simplified version of the Pong video game by smooth pursuit. This combines electronic...

10.3389/fnins.2019.00260 article EN cc-by Frontiers in Neuroscience 2019-03-26

We present an array of leaky integrate-and-fire (LIF) neuron circuits designed for the second-generation BrainScaleS mixed-signal 65-nm CMOS neuromorphic hardware. The neuronal array is embedded in the analog network core of a scaled-down prototype of a high-input-count neural network chip with a digital learning system. Designed as continuous-time circuits, the neurons are highly tunable and reconfigurable elements with accelerated dynamics. Each neuron integrates current from a multitude of incoming synapses and evokes a spike event at its output....

10.1109/tcsi.2018.2840718 article EN IEEE Transactions on Circuits and Systems I Regular Papers 2018-06-27
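The LIF dynamics realized by these circuits can be illustrated with a minimal discrete-time software model (Euler integration; all parameter values are hypothetical, since the hardware neurons are continuous-time and individually tunable):

```python
def simulate_lif(input_current, tau_m=20.0, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire neuron.

    The membrane potential leaks toward v_rest, integrates input
    current, and evokes a spike event on threshold crossing, after
    which it is reset. Returns spike times and the voltage trace.
    """
    v = v_rest
    spikes, trace = [], []
    for t, i_in in enumerate(input_current):
        # Leak toward rest plus synaptic input (Euler step).
        v += (dt / tau_m) * (v_rest - v) + dt * i_in
        if v >= v_thresh:      # threshold crossing evokes a spike event
            spikes.append(t)
            v = v_reset        # membrane is reset after the spike
        trace.append(v)
    return spikes, trace
```

With a constant suprathreshold input the model fires regularly, mirroring the tonic-spiking regime of the hardware neurons.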

Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a range of computing tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-design of their details and tend to provide limited improvements. We instead employ other powerful gradient-free optimization tools,...

10.3389/fnins.2019.00483 article EN cc-by Frontiers in Neuroscience 2019-05-21
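The gradient-free optimizers referred to above treat the hardware as a black box that maps hyperparameters to a score. A minimal (1+1) evolution strategy conveys the idea (the paper uses more powerful tools; this toy version and its step size are illustrative assumptions):

```python
import random

def one_plus_one_es(objective, x0, sigma=0.5, iters=200, seed=0):
    """Minimal (1+1) evolution strategy minimizing a black-box
    objective: perturb the current point with Gaussian noise and
    keep the candidate whenever it is at least as good."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    for _ in range(iters):
        cand = x + sigma * rng.gauss(0.0, 1.0)
        fc = objective(cand)
        if fc <= fx:
            x, fx = cand, fc
    return x, fx
```

Because only objective evaluations are needed, the same loop works whether the score comes from a simulation or from runs on neuromorphic hardware.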

The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing:...

10.48550/arxiv.2304.04640 preprint EN cc-by arXiv (Cornell University) 2023-01-01

We present first experimental results on the novel BrainScaleS-2 neuromorphic architecture, based on an analog neuro-synaptic core and augmented by embedded microprocessors for complex plasticity and experiment control. The high acceleration factor of 1000 compared to biological dynamics enables the execution of computationally expensive tasks, allowing fast emulation of long-duration experiments or rapid iteration over many consecutive trials. The flexibility of our architecture is demonstrated in a suite of five distinct...

10.1109/iscas45731.2020.9180741 article EN 2020 IEEE International Symposium on Circuits and Systems (ISCAS) 2020-09-29

Spiking neural networks combine analog computation with event-based communication using discrete spikes. While the impressive advances of deep learning are enabled by training non-spiking artificial neural networks with the backpropagation algorithm, applying this algorithm to spiking networks was previously hindered by the existence of discrete spike events and discontinuities. For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function by applying the adjoint method together with the proper partial derivatives of jumps, allowing backpropagation through...

10.1038/s41598-021-91786-z article EN cc-by Scientific Reports 2021-06-18
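The adjoint method underlying this result can be summarized schematically. The following is a generic sketch of an ODE adjoint (sign conventions vary, and the paper's technical contribution, the jump conditions at spike times, is only described in words here), not the paper's exact derivation:

```latex
% Free dynamics between spikes and the loss to be differentiated
\dot{x}(t) = f(x(t), \theta), \qquad
\mathcal{L} = \int_0^T \ell(x(t))\, dt
% Adjoint equation, integrated backward in time from \lambda(T) = 0
\dot{\lambda}(t) = -\left(\frac{\partial f}{\partial x}\right)^{\top} \lambda(t)
                   - \left(\frac{\partial \ell}{\partial x}\right)^{\top}
% Parameter gradient accumulated along the trajectory
\frac{d\mathcal{L}}{d\theta} = \int_0^T \lambda(t)^{\top}
                               \frac{\partial f}{\partial \theta}\, dt
```

At each spike time the state x jumps discontinuously, so the adjoint variable λ must receive a matching jump derived from the partial derivatives of the transition map; handling these jumps correctly is what makes exact gradients through spike events possible.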

Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements and showcase the implementation....

10.3389/fnins.2022.884128 article EN cc-by Frontiers in Neuroscience 2022-05-18

We propose a method to compute the exact number of charged localized massless matter states in an F-theory compactification on a Calabi-Yau 4-fold with non-trivial 3-form data. Our starting point is a description of the 3-form data via Deligne cohomology. A refined cycle map allows us to specify concrete elements therein in terms of the second Chow group of the 4-fold, i.e., rational equivalence classes of algebraic 2-cycles. We use intersection theory within the Chow ring to extract from this a line bundle on the classes of curves in the base of the fibration which...

10.48550/arxiv.1402.5144 preprint EN other-oa arXiv (Cornell University) 2014-01-01

Traditional neuromorphic hardware architectures rely on event-driven computation, where the asynchronous transmission of events, such as spikes, triggers local computations within synapses and neurons. While machine learning frameworks are commonly used for gradient-based training, their emphasis on dense data structures poses challenges for processing asynchronous spike trains. This problem is particularly pronounced for typical tensor data structures. In this context, we present a novel library (jaxsnn)...

10.1109/nice61972.2024.10548709 article EN 2024-04-23

Quantum computation builds on the use of correlations. Correlations could also play a central role for artificial intelligence, neuromorphic computing, or ``biological computing.'' As a step toward a systematic exploration of ``correlated computing,'' we demonstrate that spiking neurons can perform quantum operations. Spiking neurons in active or silent states are connected to two Ising spins. A density matrix is constructed from the expectation values and correlations of the Ising spins. We show for a two-qubit system that quantum gates can be learned as a change...

10.1103/physreve.106.045311 article EN Physical review. E 2022-10-31
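The construction of a density matrix from expectation values has a standard single-qubit analogue, which may help fix ideas (this is textbook Bloch-vector reconstruction, not the paper's two-spin construction):

```python
import numpy as np

def density_matrix_from_expectations(sx, sy, sz):
    """Single-qubit density matrix from the Pauli expectation values
    (the Bloch vector): rho = (I + sx*X + sy*Y + sz*Z) / 2.

    A valid quantum state requires sx**2 + sy**2 + sz**2 <= 1.
    """
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    return 0.5 * (I + sx * X + sy * Y + sz * Z)
```

In the paper's setting the expectation values come from the statistics of spiking neurons coupled to Ising spins rather than from a physical qubit.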

Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine-learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode or lacked full automation. Our framework, called hxtorch.snn, enables hardware-in-the-loop training of spiking neural networks within...

10.1145/3584954.3584993 article EN cc-by-sa 2023-04-11
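Hardware-in-the-loop training, as enabled by hxtorch.snn, interleaves a forward pass on the physical substrate with a software gradient computation. The skeleton below conveys the control flow only; `run_on_hardware` and `software_grad` are hypothetical placeholder callables, not part of the hxtorch.snn API.

```python
def train_step(params, batch, run_on_hardware, software_grad, lr=0.01):
    """One hardware-in-the-loop update (schematic).

    run_on_hardware: hypothetical callable emulating the network on
        chip and returning observed spikes/membrane traces.
    software_grad: hypothetical callable computing parameter
        gradients from the observations via a software model.
    """
    observed = run_on_hardware(params, batch)       # forward pass on chip
    grads = software_grad(params, batch, observed)  # backward pass in software
    return [p - lr * g for p, g in zip(params, grads)]
```

Because the gradient is computed against observations from the actual device, the loop automatically compensates for fixed-pattern variations of the analog components.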

Neuromorphic computing aims to incorporate lessons from studying biological nervous systems into the design of computer architectures. While existing approaches have successfully implemented aspects of those computational principles, such as sparse, spike-based computation, event-based scalable learning has remained an elusive goal in large-scale systems. However, only then can the potential energy-efficiency advantages of neuromorphic systems relative to other hardware architectures be realized during learning. We...

10.48550/arxiv.2302.07141 preprint EN other-oa arXiv (Cornell University) 2023-01-01

The evolution of biological brains has always been contingent on their embodiment within their respective environments, in which survival required appropriate navigation and manipulation skills. Studying such interactions thus represents an important aspect of computational neuroscience and, by extension, a topic of interest for neuromorphic engineering. Here, we present three examples on the BrainScaleS-2 architecture, in which the dynamical timescales of both agents and environment are accelerated by several orders of magnitude...

10.1145/3381755.3381776 preprint EN 2020-03-17

Bees display the remarkable ability to return home in a straight line after meandering excursions through their environment. Neurobiological imaging studies have revealed that this capability emerges from a path integration mechanism implemented within the insect's brain. In the present work, we emulate this neural network on the neuromorphic mixed-signal processor BrainScaleS-2 to guide bees, virtually embodied on a digital co-processor, back to their home location after randomly exploring their environment. To realize the underlying integrators, we introduce...

10.48550/arxiv.2401.00473 preprint EN other-oa arXiv (Cornell University) 2024-01-01
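At its core, path integration accumulates the agent's displacements so that the negated sum always points home. The neuromorphic implementation uses neural integrators; a plain vector-summation sketch captures the underlying computation:

```python
import math

def home_vector(steps):
    """Path integration: accumulate displacement from a sequence of
    (heading_radians, distance) steps and return the vector pointing
    from the current position back to the start."""
    x = y = 0.0
    for heading, dist in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    # The home vector is the negative of the net displacement.
    return (-x, -y)
```

Following the returned vector in a straight line brings the agent back to its starting location, regardless of how convoluted the outbound path was.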

Traditional neuromorphic hardware architectures rely on event-driven computation, where the asynchronous transmission of events, such as spikes, triggers local computations within synapses and neurons. While machine learning frameworks are commonly used for gradient-based training, their emphasis on dense data structures poses challenges for processing asynchronous spike trains. This problem is particularly pronounced for typical tensor data structures. In this context, we present a novel library (jaxsnn) built on top of JAX,...

10.48550/arxiv.2401.16841 preprint EN arXiv (Cornell University) 2024-01-30
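The mismatch jaxsnn addresses, sparse event streams versus the dense tensors that ML frameworks expect, can be made concrete with a small conversion example (plain NumPy for illustration; this is not jaxsnn's API):

```python
import numpy as np

def events_to_dense(times, units, num_steps, num_units, dt=1.0):
    """Convert a sparse event list (spike times and emitting units)
    into a dense binary spike tensor of shape (num_steps, num_units),
    as consumed by conventional tensor-based frameworks."""
    dense = np.zeros((num_steps, num_units), dtype=np.int8)
    bins = (np.asarray(times) / dt).astype(int)
    dense[bins, np.asarray(units)] = 1
    return dense
```

For realistic firing rates the dense tensor is mostly zeros, which is why an event-driven representation, as used by neuromorphic hardware and by jaxsnn, can be far more memory- and compute-efficient.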

A natural strategy for continual learning is to weigh a Bayesian ensemble of fixed functions. This suggests that if a (single) neural network could be interpreted as an ensemble, one could design effective algorithms that learn without forgetting. To realize this possibility, we observe that a classifier with N parameters can be interpreted as a weighted ensemble of classifiers, and that in the lazy regime limit these classifiers are fixed throughout learning. We term these classifiers tangent experts and show that they output valid probability distributions over labels. We then...

10.48550/arxiv.2408.17394 preprint EN arXiv (Cornell University) 2024-08-30
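The Bayesian-ensemble strategy the abstract starts from can be sketched directly: each fixed expert's weight is multiplied by the likelihood it assigned to each observed label, and predictions are the posterior-weighted mixture. This toy version (two hand-made experts, log-space weights) illustrates the strategy, not the paper's tangent-expert construction:

```python
import math

def update_log_weights(log_weights, expert_probs, label):
    """Bayesian update: multiply each expert's weight by the
    likelihood it assigned to the observed label (in log space)."""
    return [lw + math.log(p[label])
            for lw, p in zip(log_weights, expert_probs)]

def predict(log_weights, expert_probs):
    """Posterior-weighted mixture of the experts' predictive
    distributions (normalized for numerical stability)."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    z = sum(w)
    n_classes = len(expert_probs[0])
    return [sum(wi * p[c] for wi, p in zip(w, expert_probs)) / z
            for c in range(n_classes)]
```

Because the experts themselves never change, only their weights, past knowledge is never overwritten, which is what makes the ensemble view attractive for learning without forgetting.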