Amirreza Yousefzadeh

ORCID: 0000-0002-2967-5090
Research Areas
  • Advanced Memory and Neural Computing
  • CCD and CMOS Imaging Sensors
  • Neural Networks and Reservoir Computing
  • Neuroscience and Neural Engineering
  • Neural dynamics and brain function
  • Ferroelectric and Negative Capacitance Devices
  • Neural Networks and Applications
  • Quantum-Dot Cellular Automata
  • Advanced Neural Network Applications
  • Nuclear reactor physics and engineering
  • Radiation Detection and Scintillator Technologies
  • Low-power high-performance VLSI design
  • Military Defense Systems Analysis
  • Advancements in Semiconductor Devices and Circuit Design
  • Photoreceptor and optogenetics research
  • Evolutionary Algorithms and Applications
  • EEG and Brain-Computer Interfaces
  • Drilling and Well Engineering
  • Metaheuristic Optimization Algorithms Research
  • Human Pose and Action Recognition
  • Seismology and Earthquake Studies
  • Quantum Computing Algorithms and Architecture
  • Seismic Imaging and Inversion Techniques
  • Time Series Analysis and Forecasting
  • Infrared Target Detection Methodologies

University of Twente
2024-2025

Imec the Netherlands
2021-2025

Mathys (Netherlands)
2020

Ghent University
2019

Instituto de Microelectrónica de Sevilla
2016-2018

Universidad de Sevilla
2015-2018

AGH University of Krakow
2017

Consejo Superior de Investigaciones Científicas
2016

The development of brain-inspired neuromorphic computing architectures as a paradigm for Artificial Intelligence (AI) at the edge is a candidate solution that can meet strict energy and cost reduction constraints in Internet of Things (IoT) application areas. Toward this goal, we present μBrain: the first digital yet fully event-driven (clockless) architecture, with co-located memory and processing capability, that exploits event-based processing to reduce an always-on system's overall power consumption (μW dynamic...

10.3389/fnins.2021.664208 article EN cc-by Frontiers in Neuroscience 2021-05-19

Smart computing on edge devices has demonstrated huge potential for various application sectors such as personalized healthcare and smart robotics. These devices aim at bringing computation close to the source where data is generated or stored, while coping with the stringent resource budget of edge platforms. The conventional Von Neumann architecture fails to meet these requirements due to limitations such as the memory-processor transfer bottleneck. Memristor-based Computation-In-Memory (CIM) can realize data-dominated...

10.1016/j.memori.2023.100025 article EN cc-by-nc-nd Memories - Materials Devices Circuits and Systems 2023-01-21

In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating point precision computers provide. However, for dedicated hardware implementations, the precision used not only directly penalizes the required memory resources, but also the computing, communication, and energy resources. When it comes to engineering, a key question is always to find the minimum number of bits necessary to keep a neurocomputational system working satisfactorily. Here we present some...

10.3389/fnins.2018.00665 article EN cc-by Frontiers in Neuroscience 2018-10-15
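The core question of the abstract above — how few bits a synaptic weight needs — can be illustrated with plain fixed-point quantization. This is a minimal sketch, not the paper's hardware scheme; the function name and parameters are my own:

```python
def quantize_fixed_point(w, n_bits, w_max=1.0):
    """Quantize a weight to n_bits signed fixed-point in [-w_max, w_max]."""
    levels = 2 ** (n_bits - 1) - 1           # e.g. 127 representable steps for 8 bits
    q = round(w / w_max * levels)            # map to integer grid
    q = max(-levels, min(levels, q))         # saturate out-of-range values
    return q * w_max / levels                # back to the real-valued domain

# Quantization error shrinks as precision grows
for bits in (2, 4, 8):
    print(bits, abs(0.3 - quantize_fixed_point(0.3, bits)))
```

Sweeping `n_bits` while monitoring task performance is one simple way to locate the minimum workable precision.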

The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing:...

10.48550/arxiv.2304.04640 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Synaptic delay parameterization of neural network models has remained largely unexplored, but recent literature has been showing promising results, suggesting that the delay-parameterized models are simpler, smaller, sparser, and thus more energy efficient than similarly performing (e.g., in task accuracy) non-delay ones. We introduce the Shared Circular Delay Queue (SCDQ), a novel hardware structure for supporting synaptic delays on digital neuromorphic accelerators. Our analysis and results show that it scales better in...

10.48550/arxiv.2501.13610 preprint EN arXiv (Cornell University) 2025-01-23
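The basic mechanism behind a circular delay queue can be sketched in a few lines: events scheduled into slot `(now + delay) % capacity` pop out `delay` timesteps later. This is an illustrative toy model only — SCDQ itself additionally shares the structure across connections, which is not modeled here:

```python
class CircularDelayQueue:
    """Toy circular delay queue: an event pushed with delay d is
    returned by tick() exactly d timesteps later."""

    def __init__(self, max_delay):
        self.capacity = max_delay + 1
        self.slots = [[] for _ in range(self.capacity)]
        self.now = 0  # index of the slot that is due next

    def push(self, event, delay):
        assert 0 <= delay < self.capacity
        self.slots[(self.now + delay) % self.capacity].append(event)

    def tick(self):
        """Return the events due at the current timestep and advance."""
        due, self.slots[self.now] = self.slots[self.now], []
        self.now = (self.now + 1) % self.capacity
        return due
```

The memory cost grows with the maximum delay, not with the number of in-flight events, which is why ring-buffer-style structures suit hardware implementations.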

The recent rise of Large Language Models (LLMs) has revolutionized the deep learning field. However, the desire to deploy LLMs on edge devices introduces energy efficiency and latency challenges. Recurrent LLM (R-LLM) architectures have proven effective in mitigating the quadratic complexity of self-attention, making them a potential paradigm for computing on edge neuromorphic processors. In this work, we propose a low-cost, training-free algorithm to sparsify R-LLMs' activations to enhance energy efficiency on neuromorphic hardware. Our...

10.48550/arxiv.2501.16337 preprint EN arXiv (Cornell University) 2025-01-09
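A training-free activation sparsification step can be as simple as magnitude thresholding, so an event-driven processor can skip the zeroed entries. This is a generic sketch under that assumption, not the algorithm of the paper; names are hypothetical:

```python
import numpy as np

def sparsify_activations(h, threshold):
    """Zero out small-magnitude activations; return the sparsified
    vector and the fraction of entries suppressed."""
    mask = np.abs(h) >= threshold
    return h * mask, 1.0 - mask.mean()

h = np.array([0.01, -0.5, 0.02, 0.9, -0.03])
h_sparse, zero_frac = sparsify_activations(h, threshold=0.1)
```

On event-driven hardware, every suppressed activation translates directly into skipped multiply-accumulates and skipped event traffic.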

Neuromorphic processors aim to emulate the biological principles of the brain to achieve high efficiency with low power consumption. However, the lack of flexibility in most neuromorphic architecture designs results in significant performance loss and inefficient memory usage when mapping various neural network algorithms. This paper proposes SENECA, a digital neuromorphic architecture that balances the trade-offs between flexibility and efficiency using a hierarchical-controlling system. A SENECA core contains two controllers, a flexible controller (RISC-V) and an...

10.3389/fnins.2023.1187252 article EN cc-by Frontiers in Neuroscience 2023-06-23

Neuromorphic processors promise low-latency and energy-efficient processing by adopting novel brain-inspired design methodologies. Yet, current neuromorphic solutions still struggle to rival conventional deep learning accelerators' performance and area efficiency in practical applications. Event-driven data-flow processing and near/in-memory computing are the two dominant trends of neuromorphic processors. However, there remain challenges in reducing the overhead of event-driven processing and increasing the mapping efficiency of near/in-memory computing, which directly impacts...

10.3389/fnins.2024.1335422 article EN Frontiers in Neuroscience 2024-03-28

Address event representation (AER) is a widely employed asynchronous technique for interchanging "neural spikes" between different hardware elements in neuromorphic systems. Each neuron or cell in a chip or system is assigned an address (or ID), which is typically communicated through a high-speed digital bus, thus time-multiplexing a high number of neural connections. Conventional AER links use parallel physical wires together with a pair of handshaking signals (request and acknowledge). In this paper, we present...

10.1109/tbcas.2017.2717341 article EN IEEE Transactions on Biomedical Circuits and Systems 2017-08-14
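The AER idea described above — sending only (timestamp, address) pairs instead of a dense spike raster — can be sketched as a simple encode/decode pair. This models the representation only, not the handshaking or electrical protocol; function names are my own:

```python
def encode_aer(spike_raster):
    """Encode a dense spike raster (rows = timesteps, cols = neurons)
    as a list of (timestamp, neuron_address) events."""
    return [(t, addr)
            for t, row in enumerate(spike_raster)
            for addr, fired in enumerate(row) if fired]

def decode_aer(events, n_steps, n_neurons):
    """Rebuild the dense raster from the event list."""
    raster = [[0] * n_neurons for _ in range(n_steps)]
    for t, addr in events:
        raster[t][addr] = 1
    return raster
```

Because spiking activity is sparse, the event list is usually far smaller than the dense raster, which is what makes time-multiplexing many connections over one bus viable.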

We present a highly hardware-friendly STDP (Spike Timing Dependent Plasticity) learning rule for training Spiking Convolutional Cores in Unsupervised mode and Fully Connected Classifiers in Supervised mode. Examples are given for a 2-layer Neural System which learns in real time features from visual scenes obtained with spiking DVS (Dynamic Vision Sensor) cameras.

10.1109/iscas.2017.8050870 article EN 2022 IEEE International Symposium on Circuits and Systems (ISCAS) 2017-05-01
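For context, the generic pair-based STDP rule looks as follows — potentiate when the presynaptic spike precedes the postsynaptic one, depress otherwise. This is the textbook form, not the hardware-friendly variant the paper proposes; all parameter values are illustrative:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.05,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update with exponential timing windows."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)    # causal pair: strengthen
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau)    # anti-causal pair: weaken
    return min(w_max, max(w_min, w))         # clip to the allowed range
```

Hardware-friendly variants typically replace the exponentials and floating-point arithmetic with coarse, low-precision approximations.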

Vision processing with dynamic vision sensors (DVSs) is becoming increasingly popular. This type of bio-inspired sensor does not record static images. The DVS pixel activity relies on the changes in light intensity. In this paper, we introduce a platform for object recognition in which a DVS installed on a moving pan-tilt unit is in closed loop with a neural network. The network is trained to recognize objects observed by the DVS, while the sensor is moved to emulate micro-saccades. We show that performing more saccades in different directions can...

10.1109/tbcas.2018.2834428 article EN IEEE Transactions on Biomedical Circuits and Systems 2018-06-12

Inference of Deep Neural Networks for stream signal (video/audio) processing on edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain (asynchronous state-full processing) is key to its excellence in this domain. In this work, we show how asynchronous processing with state-full neurons allows exploitation of the existing sparsity in natural signals. This paper explains three different types...

10.1109/jetcas.2019.2951121 article EN IEEE Journal on Emerging and Selected Topics in Circuits and Systems 2019-11-05

SENeCA is our first RISC-V-based digital neuromorphic processor to accelerate bio-inspired Spiking Neural Networks for extreme edge applications inside or near sensors, where ultra-low power and adaptivity features are required. It is optimized to exploit unstructured spatio-temporal sparsity in computations and data transfer. It is an IP that contains interconnected Neuron Cluster Cores, with an instruction set, a Neuromorphic Co-Processor, and an event-based communication infrastructure. It improves the state of the art by:...

10.1109/aicas54282.2022.9870025 article EN 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS) 2022-06-13

Biological neurons are known to have sparse and asynchronous communications using spikes. Despite our incomplete understanding of the processing strategies of the brain, its low energy consumption in fulfilling delicate tasks suggests the existence of efficient computing mechanisms. Inspired by these key factors, we introduce SpArNet, a bio-inspired quantization scheme to convert a pre-trained convolutional neural network to a spiking neural network, with the aim of minimizing the computational load for execution on neuromorphic processors....

10.1109/aicas48895.2020.9073827 article EN 2020-04-23

The role of axonal synaptic delays in the efficacy and performance of artificial neural networks has been largely unexplored. In step-based analog-valued neural network models (ANNs), the concept is almost absent. In their spiking, neuroscience-inspired counterparts, there is hardly a systematic account of their effects on model performance in terms of accuracy and number of operations. This paper proposes a methodology for accounting for synaptic delays in the training loop of deep Spiking Neural Networks (SNNs), intending to efficiently solve machine learning tasks on data with...

10.1109/iscas46773.2023.10181778 article EN 2022 IEEE International Symposium on Circuits and Systems (ISCAS) 2023-05-21

Sparse and event-driven spiking neural network (SNN) algorithms are the ideal candidate solution for energy-efficient edge computing. Yet, with the growing complexity of SNN algorithms, it isn't easy to properly benchmark and optimize their computational cost without hardware in the loop. Although digital neuromorphic processors have been widely adopted, their black-box nature is problematic for algorithm-hardware co-optimization. In this work, we open the black box of the processor for algorithm designers by presenting the neuron...

10.1109/iscas46773.2023.10181505 article EN 2022 IEEE International Symposium on Circuits and Systems (ISCAS) 2023-05-21

Artificial Neural Networks (ANNs) show great performance in several data analysis tasks, including visual and auditory applications. However, direct implementation of these algorithms without considering their sparsity requires high processing power, consumes vast amounts of energy and suffers from scalability issues. Inspired by biology, one of the methods which can reduce power consumption and allow scaling of neural networks is asynchronous communication by means of action potentials, so-called spikes. In this work, we use...

10.1109/aicas.2019.8771624 article EN 2019-03-01

Neuronflow is a neuromorphic, many-core, data-flow architecture that exploits brain-inspired concepts to deliver a scalable event-based processing engine for neuron networks in Live AI applications. Its design is inspired by brain biology, but is not necessarily biologically plausible. The main goal is the exploitation of sparsity to dramatically reduce latency and power consumption, as required for sensor processing at the Edge.

10.23919/date48585.2020.9116352 article EN Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015 2020-03-01

We present a novel computing architecture which combines the event-based and compute-in-network principles of neuromorphic computing with a traditional dataflow architecture. The result is a fine-grained dynamic dataflow system that avoids the coding issues intrinsic to spiking systems and is suitable for both procedural workloads and deep neural network (DNN) inference, particularly the computation of sparse CNNs in low-latency applications. We present results from GrAIOne, the first chip designed using the NeuronFlow architecture, which has 200 704 neurons...

10.1109/aicas48895.2020.9073999 article EN 2020-04-24

Interest in event-based vision sensors has proliferated in recent years, with this innovative technology becoming more accessible to new researchers and highlighting such sensors' potential to enable low-latency sensing at low computational cost. These sensors can outperform frame-based cameras regarding data compression, dynamic range, temporal resolution and power efficiency. However, available mature processing methods using Artificial Neural Networks (ANNs) surpass Spiking Neural Networks (SNNs) in terms of accuracy of recognition. In...

10.1109/iscas.2018.8351562 article EN 2022 IEEE International Symposium on Circuits and Systems (ISCAS) 2018-01-01

This paper describes a digital implementation of a parallel and pipelined spiking convolutional neural network (S-ConvNet) core for processing spikes in an event-driven system. Event-driven vision systems typically use as sensor some bio-inspired device, such as the popular Dynamic Vision Sensor (DVS). DVS cameras generate events related to changes in light intensity. In this paper, we present a 2D convolution core for 128×128 pixels. S-ConvNet is an event-driven method to extract event features from the input flow. The nature...

10.1109/ebccsp.2015.7300698 article EN 2015-06-01
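The defining property of an event-driven convolution is that each incoming DVS event updates only a kernel-sized neighborhood of the neuron state, instead of convolving a full frame. A minimal sketch under that assumption (not the paper's pipelined hardware design; names are hypothetical):

```python
import numpy as np

def event_conv_step(state, event_xy, kernel):
    """Apply one incoming event at (x, y): add the kernel to the
    neighborhood of the neuron state array, clipping at the borders."""
    kh, kw = kernel.shape
    x, y = event_xy
    H, W = state.shape
    for i in range(kh):
        for j in range(kw):
            xi, yj = x + i - kh // 2, y + j - kw // 2
            if 0 <= xi < H and 0 <= yj < W:
                state[xi, yj] += kernel[i, j]
    return state
```

Per event, the work is O(kh x kw) regardless of image size, which is why sparse event streams make this far cheaper than frame-based convolution.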

Asynchronous handshaken inter-chip links are very popular among neuromorphic full-custom chips due to their delay-insensitive and high-speed properties. Of special interest are those that minimize bit-line transitions for power saving, such as the two-phase non-return-to-zero (NRZ) 2-of-7 protocol used in SpiNNaker chips. Interfacing a custom chip with field-programmable gate arrays (FPGAs) is always of great interest, so that additional functionalities can be experimented with and exploited, producing more versatile...

10.1109/tcsii.2016.2531092 article EN IEEE Transactions on Circuits & Systems II Express Briefs 2016-02-19

Activation sparsity can improve compute efficiency and resource utilization in sparsity-aware neural network accelerators. While spatial sparsification of activations is a popular topic in the DNN literature, introducing and exploiting spatio-temporal sparsity is much less explored. However, it is in perfect resonance with the trend in DNNs to shift from static-data signal processing (e.g., image processing) to stream-data processing (real-time video and audio) on embedded edge devices. Towards the goal of temporal sparsity, in this paper we...

10.1109/ijcnn55064.2022.9892578 article EN 2022 International Joint Conference on Neural Networks (IJCNN) 2022-07-18
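One common way to obtain temporal sparsity in stream processing is delta encoding: transmit only the change in each activation between consecutive frames, suppressing changes below a threshold. This is a generic sketch of that idea, not necessarily the paper's exact mechanism:

```python
import numpy as np

def delta_encode(frames, threshold=0.0):
    """Delta-encode a stream of activation frames: the first frame is
    sent whole, later frames send only above-threshold changes."""
    deltas = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        d = cur - prev
        d[np.abs(d) <= threshold] = 0.0   # suppress small changes
        deltas.append(d)
    return deltas
```

For slowly varying streams (video, audio features), most deltas are zero, so a sparsity-aware accelerator can skip the corresponding computations entirely.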