- Advanced Memory and Neural Computing
- Neural Networks and Reservoir Computing
- Wireless Signal Modulation Classification
- Ferroelectric and Negative Capacitance Devices
- Magnetic Properties of Thin Films
- Neural Networks and Applications
- Neural Dynamics and Brain Function
- Electric Power Systems and Control
- Non-Destructive Testing Techniques
- Quantum-Dot Cellular Automata
- Quantum and Electron Transport Phenomena
- Advanced Research in Systems and Signal Processing
- EEG and Brain-Computer Interfaces
- Energy Harvesting in Wireless Networks
- Advanced Thermodynamics and Statistical Mechanics
- Field-Flow Fractionation Techniques
- Advanced Thermodynamic Systems and Engines
- Muscle Activation and Electromyography Studies
- Millimeter-Wave Propagation and Modeling
- Molecular Communication and Nanonetworks
- Topic Modeling
- Parallel Computing and Optimization Techniques
- Geophysical Methods and Applications
- Advanced Neural Network Applications
- Semiconductor Quantum Structures and Devices
Forschungszentrum Jülich
2024
Université Paris-Saclay
2020-2023
Laboratoire Albert Fert
2020-2023
Centre National de la Recherche Scientifique
2020-2023
RWTH Aachen University
2023
Université Paris-Sud
2021
National Institute of Advanced Industrial Science and Technology
2019
UCLouvain
2019
Spin-torque nano-oscillators can emulate neurons at the nanoscale. Recent works show that the non-linearity of their oscillation amplitude can be leveraged to achieve waveform classification for an input signal encoded in voltage. Here, we show that the frequency and the phase of the oscillator can also be used to recognize waveforms. For this purpose, we phase-lock the oscillator to the input waveform, which carries information in its modulated frequency. In this way, we considerably decrease the amplitude, phase, and frequency noise. We show that this method allows classifying sine and square waveforms with...
Exploiting the physics of nanoelectronic devices is a major lead for implementing compact, fast, and energy-efficient artificial intelligence. In this work, we propose an original road in this direction, where assemblies of spintronic resonators used as synapses can classify analogue radio-frequency signals directly, without digitalization. The resonators convert the radio-frequency input into direct voltages through the spin-diode effect. In this process, they multiply the signals by a synaptic weight, which depends on their resonance...
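As an illustration of the principle described above, the following toy model (my own sketch, not the authors' implementation; the Lorentzian weighting and all parameter values are assumptions) shows how rectifying RF tones with a resonance-dependent efficiency amounts to a multiply-and-accumulate operation:

```python
import numpy as np

def spin_diode_mac(input_amplitudes, input_freqs, resonance_freqs, linewidth=0.1e9):
    """Toy MAC: each resonator rectifies each RF tone with a Lorentzian
    efficiency set by the detuning from its resonance frequency, and the
    rectified DC voltages of the chain simply add up."""
    v_dc = 0.0
    for a, f in zip(input_amplitudes, input_freqs):
        for f_res in resonance_freqs:
            # Lorentzian "synaptic weight" in the detuning (an assumed line shape)
            weight = 1.0 / (1.0 + ((f - f_res) / linewidth) ** 2)
            v_dc += weight * a**2  # rectified voltage scales with RF power
    return v_dc

# A tone matched to a resonance contributes strongly; a detuned one barely does
print(spin_diode_mac([1.0], [2e9], [2e9]))  # matched
print(spin_diode_mac([1.0], [3e9], [2e9]))  # detuned
```

In this picture, tuning the resonance frequencies of the chain plays the role of setting the synaptic weights.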
Artificial neural networks are a valuable tool for radio-frequency (RF) signal classification in many applications, but the digitization of analog signals and the use of general-purpose hardware non-optimized for training make the process slow and energetically costly. Recent theoretical work has proposed to use nano-devices called magnetic tunnel junctions, which exhibit intrinsic RF dynamics, to implement the multiply-and-accumulate (MAC) operation, a key building block of neural networks, directly using RF signals. In this...
Magnetic tunnel junctions are nanoscale spintronic devices with microwave generation and detection capabilities. Here we use the rectification effect called the "spin-diode" effect in a magnetic tunnel junction to wirelessly detect the emission of another junction in the auto-oscillatory regime. We show that the rectified spin-diode voltage measured at the receiving end can be reconstructed from the independently measured auto-oscillation and spin-diode spectra of each junction. Finally, we adapt the auto-oscillator model to the case of the spin-torque oscillator to accurately...
Convolutional neural networks (LeCun and Bengio 1998 The Handbook of Brain Theory and Neural Networks 255–58; LeCun, Bengio and Hinton 2015 Nature 521 436–44) are state-of-the-art and ubiquitous in modern signal processing and machine vision. Nowadays, hardware solutions based on emerging nanodevices are designed to reduce the power consumption of these networks. This is done either by using devices that implement convolutional filters and sequentially multiply consecutive subsets of the input, or by using different sets of devices to perform...
Colloidal heat engines are paradigmatic models to understand the conversion of heat into work in a noisy environment, a domain where biological and synthetic nano/micro machines function. While the operation of these engines across thermal baths is well understood, how they function with a bath whose noise statistics are non-Gaussian but also lack memory, the simplest departure from the thermal case, remains unclear. Here we quantified the performance of a colloidal Stirling engine operating between an engineered memoryless non-Gaussian bath and a Gaussian one...
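For readers unfamiliar with colloidal Stirling engines, here is a minimal numerical sketch of the standard cycle with a purely Gaussian, memoryless bath (an illustrative overdamped Langevin simulation with assumed parameters, not the engineered non-Gaussian bath studied in the work):

```python
import numpy as np

rng = np.random.default_rng(0)

def stirling_cycle(n_steps=4000, dt=1e-4, gamma=1.0, kT_hot=2.0, kT_cold=1.0):
    """Overdamped Langevin simulation of one Stirling cycle: the trap
    stiffness k is ramped at fixed temperature (isothermal branches) and
    the temperature is switched at fixed k (isochoric branches)."""
    x, work = 0.0, 0.0
    branches = [
        (np.linspace(1.0, 2.0, n_steps), kT_cold),  # compression, cold bath
        (np.full(n_steps, 2.0), kT_hot),            # heat up at fixed k
        (np.linspace(2.0, 1.0, n_steps), kT_hot),   # expansion, hot bath
        (np.full(n_steps, 1.0), kT_cold),           # cool down at fixed k
    ]
    for ks, kT in branches:
        for i in range(1, len(ks)):
            dk = ks[i] - ks[i - 1]
            work += 0.5 * dk * x**2                       # work of changing the stiffness
            noise = np.sqrt(2 * kT * dt / gamma) * rng.normal()
            x += -ks[i] * x * dt / gamma + noise          # Euler-Maruyama step
    return work

print(stirling_cycle())  # stochastic work per cycle (sign convention: on the particle)
```

Replacing the Gaussian `noise` term with a non-Gaussian, memoryless noise source is the kind of modification the abstract's engineered bath corresponds to.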
Extracting information from radio-frequency (RF) signals using artificial neural networks at low energy cost is a critical need for a wide range of applications, from radars to health. These RF inputs are composed of multiple frequencies. Here, we show that magnetic tunnel junctions can process analog RF signals with multiple frequencies in parallel and perform synaptic operations. Using a backpropagation-free method called extreme learning, we classify noisy images encoded by RF signals, with experimental data from junctions functioning as both...
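The extreme-learning approach mentioned here, where only a linear readout is trained on top of a fixed non-linear transformation, can be sketched in software as follows (a toy analogue with a random projection standing in for the physical junctions; all sizes and the ridge regularization are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def extreme_learning_fit(X, y, n_hidden=200, reg=1e-3):
    """Fixed random projection + non-linearity; only the linear readout
    is trained (ridge regression), with no backpropagation."""
    W_in = rng.normal(size=(X.shape[1], n_hidden))   # frozen random weights
    H = np.tanh(X @ W_in)                            # non-linear hidden activations
    # Solve (H^T H + reg I) W_out = H^T y
    W_out = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W_in, W_out

def extreme_learning_predict(X, W_in, W_out):
    return np.tanh(X @ W_in) @ W_out

# Toy task: classify two noisy clusters
X = np.vstack([rng.normal(-1, 0.3, (50, 4)), rng.normal(1, 0.3, (50, 4))])
y = np.array([0] * 50 + [1] * 50, dtype=float)
W_in, W_out = extreme_learning_fit(X, y)
pred = extreme_learning_predict(X, W_in, W_out) > 0.5
print((pred == (y > 0.5)).mean())  # training accuracy
```

In the hardware version described by the abstract, the fixed non-linear mapping is carried out physically by the junctions rather than by `tanh` of a random matrix.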
Fast and accurate online processing is essential for smooth prosthetic hand control with surface electromyography (sEMG) signals. Although transformers are state-of-the-art deep learning models in signal processing, the self-attention mechanism at the core of their operation requires accumulating data over large time windows. They are therefore not suited to online processing. In this paper, we use an attention mechanism with sliding windows that allows a transformer to process sequences element-by-element. Moreover, to increase...
Memristive crossbar arrays are promising non-von Neumann computing technologies to enable real-world, online learning in neural networks. However, their deployment on real-world problems is hindered by non-linearities in conductance updates, variation during operation, fabrication mismatch, and the realities of gradient-descent training. In this work, we show that, with a phenomenological model of the device and bi-level optimization, it is possible to pre-train a network to be largely insensitive to such non-idealities on...
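One facet of making a network insensitive to device non-idealities can be illustrated by injecting a simple phenomenological noise model into the forward pass during training (a minimal sketch, not the bi-level optimization of the work; the multiplicative-noise model and all parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_noise_aware(X, y, epochs=200, lr=0.1, noise_std=0.2):
    """Logistic-regression toy: each forward pass reads the weights through
    a simulated device with multiplicative conductance variation, so the
    learned solution tolerates that variation."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w_dev = w * (1 + noise_std * rng.normal(size=w.shape))  # simulated device read
        p = 1 / (1 + np.exp(-(X @ w_dev)))                      # forward pass
        w -= lr * X.T @ (p - y) / len(y)                        # gradient step on ideal weights
    return w

# Toy separable data
X = np.vstack([rng.normal(-1, 0.5, (100, 8)), rng.normal(1, 0.5, (100, 8))])
y = np.array([0] * 100 + [1] * 100, dtype=float)
w = train_noise_aware(X, y)

# Evaluate with a fresh draw of device variation
w_dev = w * (1 + 0.2 * rng.normal(size=w.shape))
acc = (((X @ w_dev) > 0) == (y > 0.5)).mean()
print(acc)
```

The abstract's approach goes further, using a phenomenological device model inside a bi-level optimization rather than plain noise injection.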
Analog Content Addressable Memories (aCAMs) have proven useful for associative in-memory computing applications like Decision Trees, Finite State Machines, and Hyper-dimensional Computing. While non-volatile implementations using FeFET and ReRAM devices offer speed, power, and area advantages, they suffer from slow write speeds and limited endurance cycles, making them less suitable for computations involving fully dynamic data patterns. To address these limitations, in this work, we propose a capacitor gain...
Transformer neural networks, driven by self-attention mechanisms, are core components of foundation models and Large Language Models. In generative transformers, self-attention uses cache memory to store token projections, avoiding recomputation at each time step. However, GPU-stored projections must be loaded into SRAM for each new generation step, causing latency and energy bottlenecks for long sequences. In this work, we propose a fast and energy-efficient hardware implementation of self-attention using analog in-memory computing based on gain...
Transformer networks, driven by self-attention, are central to Large Language Models. In generative Transformers, self-attention uses cache memory to store token projections, avoiding recomputation at each time step. However, GPU-stored projections must be loaded into SRAM for each new generation step, causing latency and energy bottlenecks. We present a custom in-memory computing architecture based on emerging charge-based memories called gain cells, which can efficiently...
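The role of the key-value cache that the gain-cell architecture targets can be seen in a small software sketch of one generative attention step (standard KV caching in NumPy; dimensions and projections are arbitrary assumptions, with no hardware modeled):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

# In the hardware proposal, these cached projections live in analog gain cells
K_cache, V_cache = [], []

def generate_step(x):
    """One decoding step: project the new token, append its K/V projections
    to the cache, and attend over all cached projections without
    recomputing them for past tokens."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    K_cache.append(k)
    V_cache.append(v)
    K, V = np.array(K_cache), np.array(V_cache)
    scores = K @ q / np.sqrt(d)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    return attn @ V

for t in range(5):
    out = generate_step(rng.normal(size=d))
print(out.shape)
```

The bottleneck the abstract describes is the repeated movement of `K_cache` and `V_cache` from GPU memory to SRAM at every step, which in-memory computing avoids by keeping them where the dot products are performed.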
Spintronic nano-synapses and nano-neurons perform complex cognitive computations with high accuracy thanks to their rich, reproducible, and controllable magnetization dynamics. These dynamical nanodevices could transform artificial intelligence hardware, provided that they can implement state-of-the-art deep neural networks. However, there is today no scalable way to connect them in multilayers. Here we show that the flagship nano-components of spintronics, magnetic tunnel junctions, can be connected into...
For numerous radio-frequency (RF) applications such as medicine, RF fingerprinting, or radar classification, it is important to be able to apply Artificial Neural Networks to RF signals. In this work we show that it is possible to perform Multiply-And-Accumulate operations directly on RF signals, without digitalization, thanks to Magnetic Tunnel Junctions (MTJs). These devices are similar to the magnetic memories already industrialized and are compatible with CMOS. We experimentally show that a chain of these MTJs can simultaneously rectify...
Transformers are state-of-the-art networks for most sequence-processing tasks. However, the self-attention mechanism often used in Transformers requires large time windows for each computation step, which makes them less suitable for online signal processing compared to Recurrent Neural Networks (RNNs). In this paper, instead of the self-attention mechanism, we use a sliding-window attention mechanism. We show that this mechanism is more efficient for continuous signals with finite-range dependencies between input and target, and that it can process sequences...
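A sliding-window (causal, finite-range) attention step can be sketched as follows (a minimal NumPy illustration under assumed dimensions, not the paper's exact model):

```python
import numpy as np

def sliding_window_attention(q, k, v, window=4):
    """Each position attends only to itself and the previous `window - 1`
    positions, so a new element can be processed as soon as it arrives."""
    T, d = q.shape
    out = np.zeros_like(v)
    for t in range(T):
        lo = max(0, t - window + 1)                  # start of the local window
        scores = k[lo:t + 1] @ q[t] / np.sqrt(d)
        w = np.exp(scores - scores.max())            # softmax over the window
        w /= w.sum()
        out[t] = w @ v[lo:t + 1]
    return out

rng = np.random.default_rng(0)
q = rng.normal(size=(10, 8))
k = rng.normal(size=(10, 8))
v = rng.normal(size=(10, 8))
out = sliding_window_attention(q, k, v)
print(out.shape)
```

Because each output depends only on a fixed-size window, the memory and per-step compute stay constant with sequence length, which is what makes the element-by-element online processing described in the abstract possible.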
Spintronic nano-synapses and nano-neurons are devices that perform complex cognitive computations with high accuracy due to their reproducible and controllable magnetization dynamics. They have the potential to revolutionize artificial intelligence hardware, but currently there is no scalable way to connect them in multilayers. We show that magnetic tunnel junctions, the flagship spintronics components, can be used to build multilayer neural networks, allowing them to function as both synapses and neurons. We build a two-layer hardware spintronic network using nine...