- Advanced Memory and Neural Computing
- Ferroelectric and Negative Capacitance Devices
- Neural Networks and Reservoir Computing
- Neural Dynamics and Brain Function
- Photoreceptor and Optogenetics Research
- CCD and CMOS Imaging Sensors
- Semiconductor Materials and Devices
- Neuroscience and Neural Engineering
- Neural Networks and Applications
- Machine Learning and ELM
- Parallel Computing and Optimization Techniques
- Machine Learning and Data Classification
- Energy Harvesting in Wireless Networks
- Molecular Communication and Nanonetworks
- EEG and Brain-Computer Interfaces
Duke University
2020-2025
As the limits of transistor technology are approached, the feature size of integrated-circuit transistors has been reduced very near to the minimum physically realizable channel length, and it has become increasingly difficult to meet the expectations outlined by Moore's law. As one of the most promising devices to replace transistors, memristors have many excellent properties that can be leveraged to develop new types of neural, non-von Neumann computing systems, which are expected to revolutionize information-processing technology....
Pre-trained transformer models with extended context windows are notoriously expensive to run at scale, often limiting real-world deployment due to their high computational and memory requirements. In this paper, we introduce Hamming Attention Distillation (HAD), a novel framework that binarizes keys and queries in the attention mechanism to achieve significant efficiency gains. By converting keys and queries into {-1, +1} vectors and replacing dot-product operations with efficient Hamming distance computations, our method drastically...
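The core trick can be shown in a few lines: for {-1, +1} vectors of dimension d, the dot product equals d - 2 × (Hamming distance), so a cheap bit-level distance recovers the attention score. Below is a minimal NumPy sketch of that identity; the function name, sign-based binarization, and scaling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hamming_attention_scores(Q, K):
    """Binarize queries/keys to {-1, +1} and score via Hamming distance.

    For +/-1 vectors of dimension d, q . k = d - 2 * hamming(q, k),
    so a bit-level distance recovers the dot-product score.
    """
    Qb = np.where(Q >= 0, 1, -1)
    Kb = np.where(K >= 0, 1, -1)
    d = Q.shape[-1]
    ham = (Qb[:, None, :] != Kb[None, :, :]).sum(axis=-1)  # pairwise Hamming
    return (d - 2 * ham) / np.sqrt(d)  # equals scaled Qb @ Kb.T

# quick check against the explicit binary dot product
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(3, 8)), rng.normal(size=(5, 8))
assert np.allclose(hamming_attention_scores(Q, K) * np.sqrt(8),
                   np.where(Q >= 0, 1, -1) @ np.where(K >= 0, 1, -1).T)
```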
Artificial intelligence (AI) provides versatile capabilities in applications such as image classification and voice recognition that are most useful in edge or mobile computing settings. Shrinking these sophisticated algorithms into small form factors with minimal resource and power budgets requires innovation at several layers of abstraction: software, algorithm, architecture, circuit, and device. However, improvements to system efficiency may impact robustness, and vice versa....
Neuromorphic computing and spiking neural networks (SNN) mimic the behavior of biological systems and have drawn interest for their potential to perform cognitive tasks with high energy efficiency. However, factors such as temporal dynamics and spike timings prove critical for information processing but are often ignored by existing works, limiting the performance and applications of neuromorphic computing. On the one hand, due to the lack of effective SNN training algorithms, it is difficult to utilize these temporal dynamics. Many...
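The temporal dynamics in question are those of spiking neuron models, where information is carried by when a neuron fires rather than by a continuous activation. A minimal leaky integrate-and-fire sketch makes this concrete; all constants (tau, threshold, time step) are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def lif_spike_train(input_current, tau=20.0, v_th=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential
    integrates input with leak constant tau and emits a spike when it
    crosses v_th, so the output spike *timing* carries information."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)   # leaky integration
        if v >= v_th:                # threshold crossing
            spikes.append(t * dt)    # record spike time
            v = 0.0                  # reset membrane potential
    return spikes

print(lif_spike_train(np.full(100, 0.08)))  # periodic spike times
```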
Due to the need for computing models that can process large quantities of data efficiently and with high throughput in many state-of-the-art machine learning algorithms, the processing-in-memory (PIM) paradigm is emerging as a potential replacement for standard digital architectures on these workloads. In this tutorial, we review the progress of PIM technology in recent years, at both the circuit and architecture levels. We further present an analysis of when and how PIM surpasses the performance of conventional architectures. Finally,...
Movement of model parameters from memory to computing elements in deep learning (DL) has led to a growing imbalance known as the memory wall. Neuromorphic computation-in-memory (CIM) is an emerging paradigm that addresses this by performing computations directly in analog memory. However, the sequential backpropagation of error through the network in DL prevents efficient parallelization. A novel method, direct feedback alignment (DFA), resolves layer dependencies by passing the output error directly to each layer. This work explores...
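DFA replaces the transposed weight matrices of backpropagation with fixed random feedback matrices, so every layer can compute its update from the output error alone. A toy one-hidden-layer sketch follows; the shapes, learning rate, and initialization are illustrative assumptions, not the configuration studied in the paper.

```python
import numpy as np

# Direct feedback alignment (DFA) for one hidden layer: B is a fixed
# random feedback matrix, so the hidden layer's update needs only the
# output error, not gradients propagated back through W2.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(1, 8)), np.array([[1.0, 0.0]])
W1, W2 = rng.normal(size=(8, 16)) * 0.1, rng.normal(size=(16, 2)) * 0.1
B = rng.normal(size=(2, 16))          # fixed random feedback weights

h = np.tanh(x @ W1)                   # forward pass
e = h @ W2 - y                        # output error
dW2 = h.T @ e
dW1 = x.T @ ((e @ B) * (1 - h**2))    # DFA: error projected by B, not W2.T
for W, dW in ((W1, dW1), (W2, dW2)):
    W -= 0.1 * dW                     # simple SGD step
```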
Equilibrium propagation (EqProp) and its adaptations for spiking neural networks (SNN) are presented as biologically plausible alternatives to back-propagation (BP) that describe a potential low-energy means of learning complex tasks in neuromorphic hardware. These algorithms are conducive to extremely efficient analog computing approaches, but a detailed circuit implementation and architectural outline have not yet been presented. Furthermore, current theoretical designs for EqProp have addressed synapse...
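EqProp trains an energy-based network in two relaxation phases, a free phase and a weakly clamped ("nudged") phase, with the weight update proportional to the difference of the two equilibrium states. The toy sketch below follows that recipe for a small symmetric network; the dynamics, nudging strength, and step counts are illustrative assumptions, not the paper's circuit-level design.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4)) * 0.1
W = (W + W.T) / 2                       # symmetric synapses
x = rng.normal(size=4)                  # external input

def relax(s, target=None, beta=0.0, steps=50, lr=0.1):
    """Settle the state toward an equilibrium of the dynamics,
    optionally nudged toward a target with strength beta."""
    for _ in range(steps):
        grad = s - np.tanh(W @ s + x)
        if target is not None:
            grad += beta * (s - target)  # weak clamping toward target
        s = s - lr * grad
    return s

beta = 0.5
s_free = relax(np.zeros(4))                              # free phase
s_nudge = relax(s_free, target=np.ones(4), beta=beta)    # nudged phase
dW = (np.outer(s_nudge, s_nudge) - np.outer(s_free, s_free)) / beta
W += 0.05 * dW                          # contrastive EqProp update
```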
Despite significant advancements in AI driven by neural networks, tree-based machine learning (TBML) models excel on tabular data. These models exhibit promising energy efficiency and high performance, particularly when accelerated with analog content-addressable memory (aCAM) arrays. However, optimizing their hardware deployment, especially by leveraging the TBML model structure and aCAM circuitry, remains challenging. In this paper, we introduce MonoSparse-CAM, a novel content-addressable memory (CAM) based computing optimization...
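The usual way a decision tree maps onto an aCAM is that each root-to-leaf path becomes one row of analog match intervals, with untested features stored as don't-cares, so classification is a single parallel lookup. The sketch below emulates that mapping in software; the toy tree, interval encoding, and first-match rule are illustrative assumptions, not the MonoSparse-CAM scheme itself.

```python
import math

# Each row stores a (low, high) interval per feature; a "don't care"
# feature uses (-inf, +inf). An input matches the row whose intervals
# all contain it, mimicking a parallel aCAM lookup.
rows = [
    ([(-math.inf, 0.5), (-math.inf, math.inf)], "class A"),
    ([(0.5, math.inf), (-math.inf, 2.0)], "class B"),
    ([(0.5, math.inf), (2.0, math.inf)], "class C"),
]

def cam_lookup(x):
    for intervals, label in rows:
        if all(lo < v <= hi for v, (lo, hi) in zip(x, intervals)):
            return label  # first matching row wins

print(cam_lookup([0.7, 3.1]))  # -> class C
```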
The renaissance of artificial intelligence highlights the tremendous need for computational power as well as higher computing efficiency in both high-performance and embedded applications [1]. To meet this demand, neuromorphic computing systems (NCS) that are inspired by biological neural systems circumvent the von Neumann bottleneck by integrating computation and memory in the same place with reduced data traffic. NCS can be efficiently implemented using emerging nonvolatile memories such as memristor crossbar arrays, which provide...
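The reason crossbars suit NCS is that they compute an analog vector-matrix multiply in one step: row voltages drive crosspoint conductances, and Kirchhoff's current law sums the products on each column. A minimal sketch, with made-up voltages and conductances:

```python
import numpy as np

# Analog vector-matrix multiply of a memristor crossbar: by Ohm's and
# Kirchhoff's laws, the column currents are I = V @ G. The numbers
# here are illustrative, not device measurements.
V = np.array([0.2, 0.5, 0.1])      # row input voltages (V)
G = np.array([[1.0, 0.5],          # crosspoint conductances (mS)
              [0.3, 0.8],
              [0.6, 0.2]])
I = V @ G                          # column output currents (mA)
print(I)                           # one multiply-accumulate per column
```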
Spike-timing-dependent plasticity (STDP) is emerging as a simple and biologically plausible approach to learning, and specialized digital implementations are readily available. Memristor technology has been embraced as a much denser solution than static random-access memory (SRAM) for STDP synapses, with capabilities built into the physics of these devices. One-selector-one-memristor (1S1R) arrays using volatile memristor devices as selectors are capable of the desired synaptic behavior with efficient spike events, but...
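The STDP rule itself is compact: the weight change depends exponentially on the time difference between pre- and postsynaptic spikes, potentiating when the presynaptic spike comes first and depressing otherwise. A sketch of the canonical pair-based window, with illustrative constants:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Canonical exponential STDP window: potentiate when the
    presynaptic spike precedes the postsynaptic one, depress
    otherwise. Constants are illustrative, not from the paper."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # pre before post: LTP
    return -a_minus * math.exp(dt / tau)      # post before pre: LTD

print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))
```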
Increased interest in artificial intelligence, coupled with a surge in nonvolatile memory research and the inevitable hitting of the "memory wall" in von Neumann computing [1], has set the stage for a new flavor of computing systems to flourish: neuromorphic systems. These are modelled after the brain in hopes of achieving a comparable level of efficiency in terms of speed, power, performance, and size. As it becomes more apparent that digital implementations are far from approaching the brain's efficiency, we look to memories for answers. In this paper,...
With an ever-growing number of parameters defining increasingly complex networks, Deep Learning has led to several breakthroughs surpassing human performance. As a result, data movement for these millions of model parameters causes a growing imbalance known as the memory wall. Neuromorphic computing is an emerging paradigm that confronts this by performing computations directly in analog memories. On the software side, the sequential Backpropagation algorithm prevents efficient parallelization and thus fast...