- Advanced Memory and Neural Computing
- CCD and CMOS Imaging Sensors
- Neural Networks and Reservoir Computing
- Neuroscience and Neural Engineering
- Neural dynamics and brain function
- Ferroelectric and Negative Capacitance Devices
- Neural Networks and Applications
- Quantum-Dot Cellular Automata
- Advanced Neural Network Applications
- Nuclear reactor physics and engineering
- Radiation Detection and Scintillator Technologies
- Low-power high-performance VLSI design
- Military Defense Systems Analysis
- Advancements in Semiconductor Devices and Circuit Design
- Photoreceptor and optogenetics research
- Evolutionary Algorithms and Applications
- EEG and Brain-Computer Interfaces
- Drilling and Well Engineering
- Metaheuristic Optimization Algorithms Research
- Human Pose and Action Recognition
- Seismology and Earthquake Studies
- Quantum Computing Algorithms and Architecture
- Seismic Imaging and Inversion Techniques
- Time Series Analysis and Forecasting
- Infrared Target Detection Methodologies
University of Twente
2024-2025
Imec the Netherlands
2021-2025
Mathys (Netherlands)
2020
Ghent University
2019
Instituto de Microelectrónica de Sevilla
2016-2018
Universidad de Sevilla
2015-2018
AGH University of Krakow
2017
Consejo Superior de Investigaciones Científicas
2016
The development of brain-inspired neuromorphic computing architectures as a paradigm for Artificial Intelligence (AI) at the edge is a candidate solution that can meet the strict energy and cost reduction constraints of Internet of Things (IoT) application areas. Toward this goal, we present μBrain: the first digital yet fully event-driven (clockless) architecture, with co-located memory and processing capability, that exploits event-based processing to reduce an always-on system's overall power consumption (μW dynamic...
Smart computing on edge devices has demonstrated huge potential for various application sectors such as personalized healthcare and smart robotics. These devices aim at bringing computation close to the source where data is generated or stored, while coping with the stringent resource budget of edge platforms. The conventional Von Neumann architecture fails to meet these requirements due to limitations such as the memory-processor transfer bottleneck. Memristor-based Computation-In-Memory (CIM) can realize data-dominated...
In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating-point precision that computers provide. However, in dedicated hardware implementations, the precision used not only directly penalizes the required memory resources, but also the computing, communication, and energy resources. When it comes to engineering, a key question is always to find the minimum number of bits necessary to keep the neurocomputational system working satisfactorily. Here we present some...
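The trade-off described above can be illustrated with a small experiment: accumulate the same stream of plasticity increments at full precision and on fixed-point grids of different widths. This is a generic sketch, not the paper's method; the grid spacing, bit widths, and toy increments are illustrative assumptions.

```python
import numpy as np

def quantize(w, n_bits, w_max=1.0):
    """Round a weight to the nearest value on a signed n-bit
    fixed-point grid spanning [-w_max, +w_max]."""
    levels = 2 ** (n_bits - 1) - 1          # number of positive levels
    step = w_max / levels
    return float(np.clip(np.round(w / step) * step, -w_max, w_max))

# Apply the same toy plasticity increments at several precisions.
rng = np.random.default_rng(0)
updates = rng.normal(0.0, 0.01, size=1000)  # illustrative weight updates

w_fp = 0.0
traces = {bits: 0.0 for bits in (4, 8, 16)}
for dw in updates:
    w_fp += dw
    for bits in traces:
        # quantize after every update, as a hardware weight store would
        traces[bits] = quantize(traces[bits] + dw, bits)

for bits, w in traces.items():
    print(f"{bits:2d} bits: final weight {w:+.4f} (fp64 reference: {w_fp:+.4f})")
```

At very low precision the grid step exceeds most increments, so updates round away entirely, which is exactly the kind of failure mode a minimum-bit analysis has to catch.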
The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing:...
Synaptic delay parameterization of neural network models has remained largely unexplored, but recent literature has been showing promising results, suggesting that delay-parameterized models are simpler, smaller, sparser, and thus more energy efficient than non-delay models of similar performance (e.g., task accuracy). We introduce the Shared Circular Delay Queue (SCDQ), a novel hardware structure for supporting synaptic delays on digital neuromorphic accelerators. Our analysis and results show that it scales better in...
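The idea of a shared circular structure for delays can be sketched in software: one ring buffer of time slots, shared by all synapses, where a spike with delay d lands in the slot d steps ahead of the current head. This is an illustrative analogue under my own assumptions, not the SCDQ microarchitecture itself.

```python
class CircularDelayQueue:
    """Software sketch of a shared circular delay queue: a ring of
    time slots advanced once per timestep; events scheduled with
    delay d are appended to the slot d positions ahead of the head."""

    def __init__(self, max_delay):
        self.slots = [[] for _ in range(max_delay + 1)]
        self.head = 0                      # slot holding events due now

    def schedule(self, event, delay):
        assert 0 <= delay < len(self.slots)
        slot = (self.head + delay) % len(self.slots)
        self.slots[slot].append(event)

    def tick(self):
        """Pop all events due at the current timestep and advance."""
        due, self.slots[self.head] = self.slots[self.head], []
        self.head = (self.head + 1) % len(self.slots)
        return due

q = CircularDelayQueue(max_delay=4)
q.schedule("spike-A", delay=2)
q.schedule("spike-B", delay=0)
print(q.tick())   # → ['spike-B']
print(q.tick())   # → []
print(q.tick())   # → ['spike-A']
```

The appeal of sharing one queue is that memory grows with the maximum delay, not with the number of delayed synapses.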
The recent rise of Large Language Models (LLMs) has revolutionized the deep learning field. However, the desire to deploy LLMs on edge devices introduces energy efficiency and latency challenges. Recurrent LLM (R-LLM) architectures have proven effective in mitigating the quadratic complexity of self-attention, making them a potential paradigm for computing on edge neuromorphic processors. In this work, we propose a low-cost, training-free algorithm to sparsify R-LLMs' activations to enhance hardware efficiency. Our...
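A training-free activation sparsification can be as simple as magnitude thresholding: keep only the largest activations and zero the rest, with no retraining. The sketch below shows that generic idea; the keep ratio and top-k criterion are my own illustrative choices, not the algorithm from the abstract.

```python
import numpy as np

def sparsify(x, keep_ratio=0.1):
    """Training-free magnitude-based sparsification: keep the
    top keep_ratio fraction of activations by |value|, zero the rest."""
    k = max(1, int(keep_ratio * x.size))
    # threshold = k-th largest absolute value across the tensor
    thresh = np.partition(np.abs(x).ravel(), -k)[-k]
    return np.where(np.abs(x) >= thresh, x, 0.0)

rng = np.random.default_rng(1)
acts = rng.normal(size=(4, 64))            # toy recurrent-layer activations
sparse = sparsify(acts, keep_ratio=0.1)
density = np.count_nonzero(sparse) / sparse.size
print(f"nonzero fraction after sparsification: {density:.3f}")
```

On an event-driven processor, the zeroed activations generate no events, so this density reduction translates fairly directly into fewer operations.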
Neuromorphic processors aim to emulate the biological principles of the brain to achieve high efficiency with low power consumption. However, the lack of flexibility in most neuromorphic architecture designs results in significant performance loss and inefficient memory usage when mapping various neural network algorithms. This paper proposes SENECA, a digital architecture that balances these trade-offs using a hierarchical-controlling system. A SENECA core contains two controllers, a flexible controller (RISC-V) and an...
Neuromorphic processors promise low-latency and energy-efficient processing by adopting novel brain-inspired design methodologies. Yet, current neuromorphic solutions still struggle to rival conventional deep learning accelerators' performance and area efficiency in practical applications. Event-driven data-flow processing and near/in-memory computing are the two dominant trends of neuromorphic processors. However, there remain challenges in reducing the overhead of event-driven processing and increasing the mapping efficiency of near/in-memory computing, which directly impacts...
Address event representation (AER) is a widely employed asynchronous technique for interchanging "neural spikes" between different hardware elements in neuromorphic systems. Each neuron or cell in a chip or system is assigned an address (or ID), which is typically communicated through a high-speed digital bus, thus time-multiplexing a high number of neural connections. Conventional AER links use parallel physical wires together with a pair of handshaking signals (request and acknowledge). In this paper, we present...
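The conventional parallel AER link described above can be modeled as a loop of four-phase req/ack handshakes, one per event address. This toy simulation only illustrates the protocol's phase ordering; the bus width and log format are arbitrary assumptions.

```python
def aer_transfer(events, bus_width=8):
    """Toy model of a parallel AER link with req/ack handshaking.
    For each spike, the sender drives the neuron address on the bus
    and the two sides run a four-phase handshake; the log records
    every phase in order."""
    log, received = [], []
    for addr in events:
        assert 0 <= addr < 2 ** bus_width
        log.append(f"bus<-{addr:0{bus_width}b}")  # sender drives address
        log.append("req=1")                       # sender raises request
        received.append(addr)                     # receiver latches address
        log.append("ack=1")                       # receiver acknowledges
        log.append("req=0")                       # sender releases request
        log.append("ack=0")                       # receiver releases ack
    return received, log

rx, log = aer_transfer([3, 200, 7])
print(rx)          # → [3, 200, 7]
print(log[:6])     # phases of the first event's handshake
```

Because each transfer completes a full handshake before the next begins, the link is self-timed: neither side needs a shared clock, only the req/ack ordering.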
We present a highly hardware-friendly STDP (Spike Timing Dependent Plasticity) learning rule for training Spiking Convolutional Cores in unsupervised mode and Fully Connected Classifiers in supervised mode. Examples are given for a 2-layer Neural System which learns in real time features from visual scenes obtained with spiking DVS (Dynamic Vision Sensor) cameras.
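For readers unfamiliar with STDP, the classic pair-based form updates a synapse according to the sign and size of the pre/post spike-time difference. The sketch below is the textbook exponential-window rule with illustrative parameters, not the hardware-friendly rule the paper proposes.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change: potentiate when the presynaptic
    spike precedes the postsynaptic one (LTP), depress otherwise (LTD),
    with exponentially decaying windows of time constant tau (ms)."""
    dt = t_post - t_pre
    if dt > 0:     # pre before post -> long-term potentiation
        return a_plus * math.exp(-dt / tau)
    else:          # post before (or with) pre -> long-term depression
        return -a_minus * math.exp(dt / tau)

print(stdp_dw(t_pre=10.0, t_post=15.0))   # positive (potentiation)
print(stdp_dw(t_pre=15.0, t_post=10.0))   # negative (depression)
```

Hardware-friendly variants typically replace the exponentials with step or piecewise-linear windows so the update needs no multipliers, which is the kind of simplification the abstract alludes to.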
Vision processing with dynamic vision sensors (DVSs) is becoming increasingly popular. This type of bio-inspired sensor does not record static images; DVS pixel activity relies on changes in light intensity. In this paper, we introduce a platform for object recognition in which a DVS installed on a moving pan-tilt unit is placed in a closed loop with a neural network. The network is trained to recognize objects observed by the DVS while the sensor is moved to emulate micro-saccades. We show that performing more saccades in different directions can...
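The DVS pixel behaviour mentioned above can be approximated per frame pair: a pixel emits an ON or OFF event when its log-intensity changes beyond a contrast threshold. This simplified frame-based model (real DVS pixels are asynchronous, and the threshold value here is arbitrary) shows why a static scene produces no output.

```python
import numpy as np

def dvs_events(frame_prev, frame_curr, threshold=0.2):
    """Emit DVS-style events where log-intensity changed beyond a
    contrast threshold: +1 (ON) for brightening, -1 (OFF) for dimming,
    0 where the pixel stayed within the threshold."""
    d = (np.log1p(frame_curr.astype(float))
         - np.log1p(frame_prev.astype(float)))
    events = np.zeros_like(d, dtype=int)
    events[d > threshold] = 1
    events[d < -threshold] = -1
    return events

prev = np.array([[10, 10], [100, 100]])
curr = np.array([[30, 10], [100, 40]])
print(dvs_events(prev, curr))   # ON at (0,0), OFF at (1,1), silence elsewhere
```

Moving the sensor (the micro-saccades in the abstract) guarantees intensity changes at object edges, so even a static object keeps generating events for the recognizer.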
Inference of Deep Neural Networks for stream signal (video/audio) processing on edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic processing. We believe one important feature of the brain (asynchronous state-full processing) is key to its excellence in this domain. In this work, we show how asynchronous processing with state-full neurons allows exploitation of the existing sparsity in natural signals. This paper explains three different types...
SENeCA is our first RISC-V-based digital neuromorphic processor to accelerate bio-inspired Spiking Neural Networks for extreme-edge applications inside or near sensors, where ultra-low power and adaptivity features are required. It is optimized to exploit unstructured spatio-temporal sparsity in computations and data transfer. It is a hardware IP that contains interconnected Neuron Cluster Cores with an instruction set, a Neuromorphic Co-Processor, and an event-based communication infrastructure. It improves the state of the art by:...
Biological neurons are known to have sparse and asynchronous communications using spikes. Despite our incomplete understanding of the processing strategies of the brain, its low energy consumption in fulfilling delicate tasks suggests the existence of efficient mechanisms. Inspired by these key factors, we introduce SpArNet, a bio-inspired quantization scheme to convert a pre-trained convolutional neural network to a spiking neural network, with the aim of minimizing the computational load for execution on neuromorphic processors....
The role of axonal synaptic delays in the efficacy and performance of artificial neural networks has been largely unexplored. In step-based analog-valued neural network models (ANNs), the concept is almost absent. In their spiking, neuroscience-inspired counterparts, there is hardly a systematic account of their effects on model performance in terms of accuracy and number of operations. This paper proposes a methodology for accounting for delays in the training loop of deep Spiking Neural Networks (SNNs), intending to efficiently solve machine learning tasks on data with...
Sparse and event-driven spiking neural network (SNN) algorithms are ideal candidate solutions for energy-efficient edge computing. Yet, with the growing complexity of SNN algorithms, it is not easy to properly benchmark and optimize their computational cost without hardware in the loop. Although digital neuromorphic processors have been widely adopted, their black-box nature is problematic for algorithm-hardware co-optimization. In this work, we open the black box of the neuromorphic processor for algorithm designers by presenting the neuron...
Artificial Neural Networks (ANNs) show great performance in several data analysis tasks, including visual and auditory applications. However, direct implementation of these algorithms without considering their sparsity requires high processing power, consumes vast amounts of energy, and suffers from scalability issues. Inspired by biology, one of the methods which can reduce power consumption and allow the scaling of neural networks is asynchronous communication by means of action potentials, so-called spikes. In this work, we use...
Neuronflow is a neuromorphic, many-core, dataflow architecture that exploits brain-inspired concepts to deliver a scalable event-based processing engine for neuron networks in Live AI applications. Its design is inspired by brain biology, but is not necessarily biologically plausible. Its main goal is the exploitation of sparsity to dramatically reduce latency and power consumption, as required for sensor processing at the Edge.
We present a novel computing architecture which combines the event-based and compute-in-network principles of neuromorphic computing with a traditional dataflow architecture. The result is a fine-grained dynamic dataflow system that avoids the coding issues intrinsic to spiking systems and is suitable for both procedural workloads and deep neural network (DNN) inference, particularly the computation of sparse CNNs in low-latency applications. We present results from GrAIOne, the first chip designed using the NeuronFlow architecture, which has 200 704 neurons...
Interest in event-based vision sensors has proliferated in recent years, with the innovative technology becoming more accessible to new researchers and highlighting such sensors' potential to enable low-latency sensing at low computational cost. These sensors can outperform frame-based ones regarding data compression, dynamic range, temporal resolution, and power efficiency. However, the available mature processing methods using Artificial Neural Networks (ANNs) surpass Spiking Neural Networks (SNNs) in terms of recognition accuracy. In...
This paper describes a digital implementation of a parallel and pipelined spiking convolutional neural network (S-ConvNet) core for processing spikes in an event-driven system. Event-driven vision systems typically use as sensor some bio-inspired device, such as the popular Dynamic Vision Sensor (DVS). DVS cameras generate events related to changes in light intensity. In this paper we present a 2D convolution core for 128×128 pixels. S-ConvNet is an event-driven method to extract event features from the input flow. The nature...
Asynchronous handshaken interchip links are very popular among neuromorphic full-custom chips due to their delay-insensitive and high-speed properties. Of special interest are those that minimize bit-line transitions for power saving, such as the two-phase non-return-to-zero (NRZ) 2-of-7 protocol used in SpiNNaker chips. Interfacing a custom chip with field-programmable gate arrays (FPGAs) is always of great interest, so that additional functionalities can be experimented with and exploited, producing more versatile...
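The 2-of-7 NRZ idea can be illustrated compactly: a symbol is signalled by flipping exactly two of seven wires relative to their previous state, and C(7,2) = 21 codewords comfortably cover 16 data symbols plus control. The decoder sketch below is generic; the codeword-to-symbol mapping is an arbitrary illustrative assignment, not SpiNNaker's actual table.

```python
from itertools import combinations

# Arbitrary illustrative mapping from wire pairs to symbol values;
# any injective assignment of the 21 pairs works for the sketch.
CODEBOOK = {pair: i for i, pair in enumerate(combinations(range(7), 2))}

def decode_transition(prev_wires, curr_wires):
    """Return the symbol signalled by an NRZ 2-of-7 transition, i.e.
    by the set of wires whose level changed between the two states.
    Returns None if the transition is incomplete or invalid."""
    changed = prev_wires ^ curr_wires
    flipped = tuple(i for i in range(7) if (changed >> i) & 1)
    if len(flipped) != 2:
        return None
    return CODEBOOK[flipped]

prev = 0b0000000
curr = 0b0000011               # wires 0 and 1 flipped together
print(decode_transition(prev, curr))   # → 0
```

Because information rides on transitions rather than levels, each symbol costs exactly two wire toggles, which is where the power saving over parallel single-rail buses comes from.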
Activation sparsity can improve compute efficiency and resource utilization in sparsity-aware neural network accelerators. While spatial sparsification of activations is a popular topic in the DNN literature, introducing and exploiting spatio-temporal sparsity is much less explored. However, it is in perfect resonance with the trend in DNNs to shift from static-data signal processing (e.g., image processing) to stream-data processing (e.g., real-time video and audio) on embedded edge devices. Towards the goal of temporal sparsity, in this paper, we...
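Temporal sparsity in stream processing usually means transmitting only the activations that changed meaningfully since the previous timestep. The delta-threshold sketch below illustrates that generic mechanism under my own assumptions (threshold value, dense cache at the receiver); it is not the specific scheme this paper introduces.

```python
import numpy as np

def temporal_delta(prev_act, curr_act, theta=0.05):
    """Temporal sparsification sketch: keep only activation deltas whose
    magnitude exceeds theta; sub-threshold positions send nothing and
    the receiver reuses its cached previous value."""
    delta = curr_act - prev_act
    mask = np.abs(delta) > theta
    return np.where(mask, delta, 0.0), mask

prev = np.array([0.50, 0.20, 0.90, 0.10])
curr = np.array([0.52, 0.80, 0.90, 0.04])
sparse_delta, mask = temporal_delta(prev, curr)
reconstructed = prev + sparse_delta      # receiver-side state update
print(mask)            # only the large changes are transmitted
print(reconstructed)   # close to curr, exact where mask is True
```

For slowly varying streams such as video, most per-step deltas fall under the threshold, so the event count (and with it compute and traffic) drops sharply at a small, bounded reconstruction error.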