Brady Taylor

ORCID: 0000-0003-2032-0960
Research Areas
  • Advanced Memory and Neural Computing
  • Ferroelectric and Negative Capacitance Devices
  • Neural Networks and Reservoir Computing
  • Neural dynamics and brain function
  • Photoreceptor and optogenetics research
  • CCD and CMOS Imaging Sensors
  • Semiconductor materials and devices
  • Neuroscience and Neural Engineering
  • Neural Networks and Applications
  • Machine Learning and ELM
  • Parallel Computing and Optimization Techniques
  • Machine Learning and Data Classification
  • Energy Harvesting in Wireless Networks
  • Molecular Communication and Nanonetworks
  • EEG and Brain-Computer Interfaces

Duke University
2020-2025

As the limits of transistor technology are approached, feature sizes in integrated circuits have been reduced very near to the minimum physically realizable channel length, and it has become increasingly difficult to meet the expectations outlined by Moore's law. As one of the most promising devices to replace transistors, memristors have many excellent properties that can be leveraged to develop new types of neural, non-von Neumann computing systems, which are expected to revolutionize information-processing technology...

10.1109/tcsi.2022.3159153 article EN IEEE Transactions on Circuits and Systems I Regular Papers 2022-03-21

Pre-trained transformer models with extended context windows are notoriously expensive to run at scale, often limiting real-world deployment due to their high computational and memory requirements. In this paper, we introduce Hamming Attention Distillation (HAD), a novel framework that binarizes keys and queries in the attention mechanism to achieve significant efficiency gains. By converting keys and queries into {-1, +1} vectors and replacing dot-product operations with efficient Hamming distance computations, our method drastically...
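
The truncated abstract omits the distillation details, but the arithmetic trick behind the efficiency gain is standard: for vectors in {-1, +1}^d, the dot product and the Hamming distance H are interchangeable via q·k = d − 2·H(q, k), so XOR-and-popcount hardware can stand in for multiply-accumulate units. Below is a minimal NumPy sketch of that identity only; the function names, dimensions, and tie-breaking rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def binarize(x):
    """Map real-valued projections to {-1, +1} via sign (ties -> +1)."""
    return np.where(x >= 0, 1.0, -1.0)

def hamming_attention_scores(Q, K):
    """Attention logits from binarized queries/keys.

    For q, k in {-1, +1}^d the dot product satisfies
    q.k = d - 2*H(q, k), so computing Hamming distances H
    recovers the (scaled) dot-product scores exactly.
    """
    d = Q.shape[-1]
    Qb, Kb = binarize(Q), binarize(K)
    H = (d - Qb @ Kb.T) / 2          # Hamming distances
    return (d - 2 * H) / np.sqrt(d)  # equivalent scaled dot product

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 64))  # 4 queries, head dimension 64
K = rng.normal(size=(6, 64))  # 6 keys
scores = hamming_attention_scores(Q, K)
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)  # softmax
```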

10.48550/arxiv.2502.01770 preprint EN arXiv (Cornell University) 2025-02-03

Artificial intelligence (AI) provides versatile capabilities in applications such as image classification and voice recognition that are most useful in edge or mobile computing settings. Shrinking these sophisticated algorithms into small form factors with minimal resource and power budgets requires innovation at several layers of abstraction: software, algorithmic, architectural, circuit, and device-level innovations. However, improvements to system efficiency may impact robustness and vice versa...

10.1145/3724396 article EN cc-by ACM Transactions on Embedded Computing Systems 2025-03-25

Neuromorphic computing and spiking neural networks (SNN) mimic the behavior of biological systems and have drawn interest for their potential to perform cognitive tasks with high energy efficiency. However, some factors, such as temporal dynamics and spike timings, prove critical to information processing but are often ignored by existing works, limiting the performance and applications of neuromorphic computing. On one hand, due to the lack of effective SNN training algorithms, it is difficult to utilize these dynamics. Many...
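
For readers unfamiliar with the temporal dynamics the abstract refers to, the usual minimal model is a leaky integrate-and-fire (LIF) neuron: the same total input charge produces very different spike trains depending on when it arrives. The sketch below is a generic textbook LIF, not the paper's model; all parameter values are illustrative.

```python
import numpy as np

def lif_simulate(input_current, v_th=1.0, tau=20.0, dt=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    with time constant tau, integrates input, and spikes on threshold."""
    v, spikes = v_reset, []
    for t, i_t in enumerate(input_current):
        v += dt * (-(v - v_reset) / tau + i_t)  # leaky integration
        if v >= v_th:
            spikes.append(t * dt)  # the spike *time* carries information
            v = v_reset
    return spikes

# Identical total input charge, different timing -> different outputs:
burst  = lif_simulate(np.r_[np.full(10, 0.3), np.zeros(90)])  # spikes
spread = lif_simulate(np.full(100, 0.03))                     # stays silent
```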

10.1109/dac18074.2021.9586133 article EN 2021-11-08

Due to the need for computing models that can process large quantities of data efficiently and with high throughput in many state-of-the-art machine learning algorithms, the processing-in-memory (PIM) paradigm is emerging as a potential replacement for standard digital architectures on these workloads. In this tutorial, we review the progress of PIM technology in recent years, at both the circuit and architecture levels. We further present an analysis of when and how PIM surpasses the performance of conventional architectures. Finally,...
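
As a rough illustration of why PIM suits these workloads, the sketch below models the canonical analog crossbar primitive: by Ohm's and Kirchhoff's laws, an N×M array of programmable conductances performs an entire vector-matrix multiply in one analog step rather than N·M digital multiply-accumulates. This is a toy model under idealized assumptions (linear devices, additive Gaussian programming noise), not the tutorial's analysis.

```python
import numpy as np

def crossbar_vmm(voltages, conductances, noise_sigma=0.0, rng=None):
    """Idealized memristor-crossbar vector-matrix multiply.

    Each column current is sum_i V_i * G_ij (Ohm's law per device,
    Kirchhoff's current law per column), so the whole VMM happens
    in a single analog read.
    """
    rng = rng or np.random.default_rng()
    g = conductances
    if noise_sigma > 0:  # crude model of device programming variation
        g = g + rng.normal(0.0, noise_sigma, size=g.shape)
    return voltages @ g  # column currents I_j

weights = np.random.default_rng(1).uniform(0, 1e-4, size=(128, 10))  # siemens
x = np.random.default_rng(2).uniform(0, 0.2, size=128)               # volts
currents = crossbar_vmm(x, weights, noise_sigma=1e-6)
```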

10.1109/tcsii.2022.3168404 article EN publisher-specific-oa IEEE Transactions on Circuits & Systems II Express Briefs 2022-04-19

Movement of model parameters from memory to computing elements in deep learning (DL) has led to a growing imbalance known as the memory wall. Neuromorphic computation-in-memory (CIM) is an emerging paradigm that addresses this by performing computations directly in analog memory. However, the sequential backpropagation of error through the network in DL prevents efficient parallelization. A novel method, direct feedback alignment (DFA), resolves layer dependencies by passing the output error directly to each layer. This work explores...
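
The key contrast in the abstract is that DFA projects the output error straight to every layer through fixed random feedback matrices, so no layer's update waits on downstream gradients. A minimal NumPy sketch of that rule follows; the toy layer sizes, tanh activation, and plain SGD step are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]                     # toy network
W = [rng.normal(0, 0.3, (a, b)) for a, b in zip(sizes, sizes[1:])]
# Fixed random feedback matrices: output error -> each hidden layer
B = [rng.normal(0, 0.3, (sizes[-1], n)) for n in sizes[1:-1]]

def tanh_d(a):  # derivative of tanh expressed via its activation
    return 1.0 - a**2

def dfa_step(x, y, lr=0.05):
    # Forward pass (tanh hidden layers, linear output)
    h = [x]
    for w in W[:-1]:
        h.append(np.tanh(h[-1] @ w))
    out = h[-1] @ W[-1]
    e = out - y  # output error

    # DFA: every layer receives the *output* error through its own
    # fixed random matrix B[l]; no backward sweep is needed.
    W[-1] -= lr * np.outer(h[-1], e)
    for l in range(len(W) - 1):
        delta = (e @ B[l]) * tanh_d(h[l + 1])
        W[l] -= lr * np.outer(h[l], delta)
    return float((e**2).mean())

loss = dfa_step(rng.normal(size=8), np.ones(4))
```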

10.1109/mwscas57524.2023.10405905 article EN 2023 IEEE 66th International Midwest Symposium on Circuits and Systems (MWSCAS) 2023-08-06

Equilibrium propagation (EqProp) and its adaptations for spiking neural networks (SNN) are presented as biologically plausible alternatives to back-propagation (BP), which describe a potential low-energy means of learning complex tasks in neuromorphic hardware. These algorithms are conducive to extremely efficient analog computing approaches, but a detailed circuit implementation and architectural outline have not yet been presented. Furthermore, current theoretical designs of EqProp have not addressed synapse...
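
The truncated abstract does not restate the algorithm, so for context: EqProp contrasts two network equilibria, a free phase and a weakly nudged phase in which the outputs are pulled toward the target with strength beta, and updates each weight from purely local co-activities. The sketch below is a heavily simplified dynamical version under assumed symmetric weights and a hard-sigmoid nonlinearity; it is not the paper's circuit design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
Wx = rng.normal(0, 0.5, (n_in, n_hid))   # input -> hidden (input clamped)
Wh = rng.normal(0, 0.5, (n_hid, n_out))  # hidden <-> output (symmetric)
rho = lambda v: np.clip(v, 0.0, 1.0)     # hard-sigmoid nonlinearity

def settle(x, y=None, beta=0.0, steps=200, eps=0.1):
    """Free phase (beta=0) relaxes to an equilibrium; the nudged phase
    (beta>0) weakly pulls the output units toward the target y."""
    h, o = np.zeros(n_hid), np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x @ Wx + o @ Wh.T)
        do = -o + rho(h @ Wh) + (beta * (y - o) if beta else 0.0)
        h, o = h + eps * dh, o + eps * do
    return h, o

def eqprop_update(x, y, beta=0.2, lr=0.1):
    global Wx, Wh
    hf, of = settle(x)                # free equilibrium
    hn, on = settle(x, y, beta=beta)  # nudged equilibrium
    # Contrastive, purely local rule: the difference of co-activities
    # at the two equilibria approximates the gradient as beta -> 0.
    Wh += (lr / beta) * (np.outer(hn, on) - np.outer(hf, of))
    Wx += (lr / beta) * (np.outer(x, hn) - np.outer(x, hf))

eqprop_update(rng.uniform(size=4), np.array([1.0, 0.0]))
```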

10.1109/aicas54282.2022.9869989 article EN 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS) 2022-06-13

Despite significant advancements in AI driven by neural networks, tree-based machine learning (TBML) models excel on tabular data. These models exhibit promising energy efficiency and high performance, particularly when accelerated on analog content-addressable memory (aCAM) arrays. However, optimizing their hardware deployment, especially in leveraging TBML model structure and aCAM circuitry, remains challenging. In this paper, we introduce MonoSparse-CAM, a novel CAM-based computing optimization...
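
One way to picture the TBML-to-aCAM mapping the abstract alludes to: each root-to-leaf path of a decision tree becomes one CAM row of per-feature analog ranges, and every row compares against the input in parallel, so inference is a single associative lookup. The tiny tree, ranges, and helper below are hypothetical illustrations, not MonoSparse-CAM's actual mapping or optimization.

```python
import numpy as np

# Each aCAM row stores one root-to-leaf path as per-feature [lo, hi)
# ranges; an input "matches" a row iff every feature falls in range.
# Hypothetical 2-feature tree:  f0 < 0.5 ? (f1 < 0.3 ? A : B) : C
ROWS = [
    (np.array([[-np.inf, 0.5], [-np.inf, 0.3]]), "A"),
    (np.array([[-np.inf, 0.5], [0.3,  np.inf]]), "B"),
    (np.array([[0.5,  np.inf], [-np.inf, np.inf]]), "C"),
]

def acam_lookup(x):
    """Emulate the parallel match: hardware compares all rows against
    x at once; exactly one path's range-set matches a given input."""
    for ranges, leaf in ROWS:
        if np.all((x >= ranges[:, 0]) & (x < ranges[:, 1])):
            return leaf

assert acam_lookup(np.array([0.2, 0.7])) == "B"
assert acam_lookup(np.array([0.9, 0.1])) == "C"
```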

10.48550/arxiv.2407.11071 preprint EN arXiv (Cornell University) 2024-07-12

The renaissance of artificial intelligence highlights the tremendous need for computational power as well as higher computing efficiency in both high-performance and embedded applications. [1] To meet this demand, neuromorphic computing systems (NCS) that are inspired by biological neural networks circumvent the von Neumann bottleneck by integrating computation and memory in the same place with reduced data traffic. NCS can be efficiently implemented using emerging nonvolatile memories such as memristor crossbar arrays, which provide...

10.1109/vlsi-tsa48913.2020.9203744 article EN 2020-08-01

Spike-timing-dependent plasticity (STDP) is emerging as a simple and biologically plausible approach to learning, and specialized digital implementations are readily available. Memristor technology has been embraced as a much denser solution than static random-access memory (SRAM) for STDP synapses, with learning capabilities built into the physics of these devices. One-selector-one-memristor (1S1R) arrays using volatile memristor devices as selectors are capable of the desired synaptic behavior and efficient handling of spike events, but...
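
For reference, the synaptic behavior being built into device physics here is typically the pair-based STDP window: the weight change's sign depends on the order of pre- and post-synaptic spikes, and its magnitude decays exponentially with their separation. The sketch below is the textbook rule with common illustrative constants, not measurements from the 1S1R array.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight update; delta_t = t_post - t_pre (ms).

    Pre-before-post (delta_t > 0) potentiates; post-before-pre
    depresses; both decay exponentially with |delta_t| / tau.
    """
    return np.where(
        delta_t > 0,
        a_plus * np.exp(-delta_t / tau),
        -a_minus * np.exp(delta_t / tau),
    )

# Causal pairing strengthens the synapse, anti-causal weakens it:
print(stdp_dw(np.array([5.0])))   # ~ +0.0078
print(stdp_dw(np.array([-5.0])))  # ~ -0.0093
```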

10.1109/iscas51556.2021.9401644 article EN 2021 IEEE International Symposium on Circuits and Systems (ISCAS) 2021-04-27

Increased interest in artificial intelligence, coupled with a surge in nonvolatile memory research and the inevitable hitting of the "memory wall" in von Neumann computing [1], has set the stage for a new flavor of computing systems to flourish: neuromorphic systems. These are modelled after the brain in hopes of achieving a comparable level of efficiency in terms of speed, power, performance, and size. As it becomes more apparent that digital implementations are far from approaching the brain's efficiency, we look to emerging memories for answers. In this paper,...

10.1117/12.2554915 article EN 2020-03-23

With an ever-growing number of parameters defining increasingly complex networks, Deep Learning has led to several breakthroughs surpassing human performance. As a result, data movement for these millions of model parameters causes a growing imbalance known as the memory wall. Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories. On the software side, the sequential Backpropagation algorithm prevents efficient parallelization and thus fast...

10.48550/arxiv.2212.14337 preprint EN cc-by arXiv (Cornell University) 2022-01-01

Neuromorphic computing and spiking neural networks (SNN) mimic the behavior of biological systems and have drawn interest for their potential to perform cognitive tasks with high energy efficiency. However, some factors, such as temporal dynamics and spike timings, prove critical to information processing but are often ignored by existing works, limiting the performance and applications of neuromorphic computing. On one hand, due to the lack of effective SNN training algorithms, it is difficult to utilize these dynamics. Many...

10.48550/arxiv.2104.10712 preprint EN other-oa arXiv (Cornell University) 2021-01-01