Abhishek Moitra

ORCID: 0000-0002-0534-5206
Research Areas
  • Advanced Memory and Neural Computing
  • Ferroelectric and Negative Capacitance Devices
  • Neural dynamics and brain function
  • Adversarial Robustness in Machine Learning
  • Advanced Neural Network Applications
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Domain Adaptation and Few-Shot Learning
  • Integrated Circuits and Semiconductor Failure Analysis
  • Digital Filter Design and Implementation
  • Stochastic Gradient Optimization Techniques
  • Anomaly Detection Techniques and Applications
  • Tensor decomposition and applications
  • Neural Networks and Applications
  • Neural Networks and Reservoir Computing
  • Parallel Computing and Optimization Techniques
  • Semiconductor materials and devices
  • Thin-Film Transistor Technologies
  • Advanced Electrical Measurement Techniques
  • Hypothalamic control of reproductive hormones
  • Neuroscience and Neural Engineering
  • CCD and CMOS Imaging Sensors
  • Retinal Imaging and Analysis
  • Machine Learning and ELM
  • Ovarian function and disorders
  • Numerical Methods and Algorithms

Yale University
2021-2024

Birla Institute of Technology and Science, Pilani
2018-2020

Birla Institute of Technology and Science, Pilani - Goa Campus
2019

Recent Spiking Neural Networks (SNNs) works focus on an image classification task, therefore various coding techniques have been proposed to convert images into temporal binary spikes. Among them, rate coding and direct coding are regarded as prospective candidates for building a practical SNN system because they show state-of-the-art performance on large-scale datasets. Despite their usage, there is little attention to comparing these two coding schemes in a fair manner. In this paper, we conduct a comprehensive analysis of the two codings...

10.1109/icassp43922.2022.9747906 article EN ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2022-04-27
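
As a quick illustration of the two input-coding schemes compared above, the following sketch (in PyTorch, with hypothetical parameter names, not the authors' code) contrasts rate coding, which samples Bernoulli spikes from pixel intensities at every timestep, with direct coding, which feeds the analog image at every timestep and lets the first trainable layer emit spikes.

```python
# Illustrative sketch of rate coding vs. direct coding for SNN inputs.
import torch

def rate_coding(image: torch.Tensor, timesteps: int) -> torch.Tensor:
    """Pixel intensities in [0, 1] become a Bernoulli spike train:
    each pixel fires with probability equal to its intensity."""
    return torch.stack([torch.bernoulli(image) for _ in range(timesteps)])

def direct_coding(image: torch.Tensor, timesteps: int) -> torch.Tensor:
    """Feed the analog image at every timestep; the first (trainable)
    layer of the SNN produces the actual spikes."""
    return image.unsqueeze(0).repeat(timesteps, *([1] * image.dim()))

if __name__ == "__main__":
    img = torch.rand(1, 28, 28)                   # dummy normalized image
    print(rate_coding(img, timesteps=8).shape)    # (8, 1, 28, 28), binary
    print(direct_coding(img, timesteps=8).shape)  # (8, 1, 28, 28), analog
```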

Spiking neural networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional artificial neural networks (ANNs) due to their inherent high-sparsity activation. Recently, SNNs trained with backpropagation through time (BPTT) have achieved higher accuracy on image recognition tasks than other SNN training algorithms. Despite the success from the algorithm perspective, prior works neglect the evaluation of the hardware energy overheads of BPTT, owing to the lack of an evaluation platform for this training algorithm. Moreover,...

10.1109/tcad.2022.3213211 article EN IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 2022-10-10
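
For context on the training setup whose hardware cost this work evaluates, here is a compact, hedged sketch of BPTT for an SNN using a surrogate gradient for the non-differentiable spike function; the leak, threshold, and surrogate shape are illustrative assumptions, not the paper's settings.

```python
# Minimal BPTT-through-an-SNN sketch with a surrogate spike gradient.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th >= 0).float()          # hard spike in forward
    @staticmethod
    def backward(ctx, grad_out):
        (v_minus_th,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v_minus_th.abs()) ** 2  # smooth proxy
        return grad_out * surrogate

w = torch.randn(10, 10, requires_grad=True)
v = torch.zeros(1, 10)
loss = 0.0
for t in range(4):                                # unroll over timesteps
    v = 0.9 * v + torch.rand(1, 10) @ w           # leak + integrate
    s = SurrogateSpike.apply(v - 1.0)             # spike via surrogate grad
    v = v - s                                     # soft reset
    loss = loss + s.sum()
loss.backward()                                   # BPTT through all steps
```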

Spiking neural networks (SNNs) are an active research domain toward energy-efficient machine intelligence. Compared to conventional artificial neural networks (ANNs), SNNs use temporal spike data and bio-plausible neuronal activation functions such as leaky-integrate-and-fire / integrate-and-fire (LIF/IF) for processing. However, SNNs incur significant dot-product operations causing high memory and computation overhead in standard von-Neumann computing platforms. To this end, in-memory computing (IMC) architectures have been proposed...

10.1109/tcad.2023.3274918 article EN IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 2023-05-11
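
To make the LIF/IF dynamics mentioned above concrete, a minimal single-timestep LIF update is sketched below; the leak factor and threshold are placeholder values, and the dot-product input x_t stands in for the crossbar MAC output.

```python
import torch

def lif_step(x_t, v, leak=0.9, v_th=1.0):
    """One LIF timestep: leak the membrane potential, integrate the input
    (here the dot-product output), spike where the threshold is crossed,
    then soft-reset by subtracting the threshold."""
    v = leak * v + x_t
    spike = (v >= v_th).float()
    v = v - spike * v_th
    return spike, v

v = torch.zeros(8)
for t in range(4):                         # iterate over timesteps
    spike, v = lif_step(torch.rand(8), v)
```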

Pruning for Spiking Neural Networks (SNNs) has emerged as a fundamental methodology for deploying deep SNNs on resource-constrained edge devices. Though the existing pruning methods can provide extremely high weight sparsity for deep SNNs, the high sparsity brings a workload imbalance problem. Specifically, the imbalance arises when different numbers of non-zero weights are assigned to hardware units running in parallel. This results in low hardware utilization and thus imposes longer latency and higher energy costs. In preliminary experiments, we show...

10.1109/tetci.2024.3393367 article EN IEEE Transactions on Emerging Topics in Computational Intelligence 2024-05-06
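
The workload-imbalance problem described above can be seen in a toy experiment: after unstructured pruning, parallel hardware units (here crudely modeled as weight-matrix columns) end up holding very different numbers of non-zero weights, so the slowest unit dictates latency. The sparsity level and mapping below are hypothetical.

```python
# Toy illustration of workload imbalance after unstructured pruning.
import torch

w = torch.randn(128, 128)
w[w.abs() < 1.2] = 0.0                       # roughly 80-90% weight sparsity
nnz_per_unit = (w != 0).sum(dim=0)           # non-zeros per column / unit
print("min/mean/max non-zeros per unit:",
      nnz_per_unit.min().item(),
      nnz_per_unit.float().mean().item(),
      nnz_per_unit.max().item())             # latency follows the max
```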

Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation. However, their non-linear activation, the Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store the membrane voltage that captures the temporal dynamics of spikes. Although the required memory cost for LIF neurons significantly increases as the input dimension grows, a technique to reduce this cost has not been explored so far. To address this, we propose...

10.3389/fnins.2023.1230002 article EN cc-by Frontiers in Neuroscience 2023-07-31

Spiking Neural Networks (SNNs) have recently attracted widespread research interest as an efficient alternative to traditional Artificial Neural Networks (ANNs) because of their capability to process sparse and binary spike information and avoid expensive multiplication operations. Although the efficiency of SNNs can be realized on the In-Memory Computing (IMC) architecture, we show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware. Therefore, in order to maximize the efficiency of SNNs, we propose an input-aware...

10.1109/dac56929.2023.10247869 article EN 2023-07-09

We propose Multiplier-less INTeger (MINT) quantization, a uniform quantization scheme that efficiently compresses weights and membrane potentials in spiking neural networks (SNNs). Unlike previous SNN quantization methods, MINT quantizes the memory-intensive membrane potentials to an extremely low precision (2-bit), significantly reducing the memory footprint. MINT also shares the scaling factor between weights and membrane potentials, eliminating the need for the multipliers required by conventional uniform quantization. Experimental results show our method matches the accuracy...

10.1109/asp-dac58780.2024.10473825 article EN 2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC) 2024-01-22
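
A hedged sketch of the core idea, a uniform integer quantizer whose scaling factor is shared between weights and membrane potentials so that no multiplier is needed for re-scaling, is shown below; the helper name and scale value are illustrative, not the released MINT implementation.

```python
# Uniform integer quantization with one scale shared by weights and
# membrane potentials (illustrative, not the authors' code).
import torch

def quantize_uniform(x, scale, bits=2):
    qmax = 2 ** (bits - 1) - 1                   # 2-bit signed: levels -2..1
    return torch.clamp(torch.round(x / scale), -qmax - 1, qmax)

shared_scale = 0.05                              # one scale for w and v
w_q = quantize_uniform(torch.randn(16, 16), shared_scale, bits=2)
v_q = quantize_uniform(torch.randn(16), shared_scale, bits=2)
# Because w and v share the scale, the update v_q + (w_q @ spikes) stays in
# the same integer domain, with no multiplier needed for re-scaling.
```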

10.1109/tcad.2024.3435762 article EN IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 2024-01-01

As neural networks gain widespread adoption in embedded devices, there is a growing need for model compression techniques to facilitate seamless deployment in resource-constrained environments. Quantization is one of the go-to methods yielding state-of-the-art model compression. Most quantization approaches take a fully trained model, then apply different heuristics to determine the optimal bit-precision for different layers of the network, and finally retrain the network to regain any drop in accuracy. Based on Activation Density - the...

10.23919/date51398.2021.9474031 article EN Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015 2021-02-01
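
Since the abstract is truncated before defining the signal, the snippet below shows one plausible reading of activation density, the fraction of non-zero post-ReLU activations in a layer, which the approach uses to guide layer-wise bit-precision; the layer and input sizes are hypothetical.

```python
# Activation density = fraction of non-zero activations in a layer.
import torch
import torch.nn as nn

def activation_density(act: torch.Tensor) -> float:
    return (act > 0).float().mean().item()

layer = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
density = activation_density(layer(torch.randn(32, 64)))
print(f"activation density: {density:.2f}")   # lower density -> fewer bits
```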

Spiking Neural Networks (SNNs) have recently emerged as the low-power alternative to Artificial Neural Networks (ANNs) owing to their asynchronous, sparse, and binary information processing. To improve energy-efficiency and throughput, SNNs can be implemented on memristive crossbars where Multiply-and-Accumulate (MAC) operations are realized in the analog domain using emerging Non-Volatile-Memory (NVM) devices. Despite the compatibility of SNNs with crossbars, there is little attention to studying the effect of intrinsic crossbar...

10.1145/3531437.3539729 preprint EN 2022-07-16

In-Memory Computing (IMC) platforms such as analog crossbars are gaining focus as they facilitate the acceleration of low-precision Deep Neural Networks (DNNs) with high area- and compute-efficiencies. However, intrinsic non-idealities in crossbars, which are often non-deterministic and non-linear, degrade the performance of deployed DNNs. In addition to quantization errors, the non-idealities most frequently encountered during inference include crossbar circuit-level parasitic resistances and device-level stochastic read noise...

10.1145/3583781.3590241 article EN Proceedings of the Great Lakes Symposium on VLSI 2022 2023-05-31
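
An illustrative model (not the paper's simulator) of how the two non-idealities named above perturb an analog crossbar MAC: multiplicative stochastic read noise on the conductances, plus a lumped attenuation term standing in for parasitic-resistance (IR-drop) losses.

```python
# Crude model of a non-ideal analog crossbar MAC.
import torch

def ideal_mac(G, V):
    return V @ G                                   # column currents I = V.G

def nonideal_mac(G, V, read_noise_std=0.05, ir_drop_factor=0.03):
    G_noisy = G * (1 + read_noise_std * torch.randn_like(G))  # read noise
    I = V @ G_noisy
    return I * (1 - ir_drop_factor)                # lumped parasitic loss

G = torch.rand(64, 32)        # conductances (mapped weights)
V = torch.rand(8, 64)         # input voltages (activations)
err = (nonideal_mac(G, V) - ideal_mac(G, V)).abs().mean()
print(f"mean absolute MAC error: {err:.4f}")
```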

Today, there is a plethora of In-Memory Computing (IMC) devices - SRAMs, PCMs & FeFETs - that emulate convolutions on crossbar-arrays with high throughput. Each IMC device offers its own pros and cons for the inference of Deep Neural Networks (DNNs) on crossbars in terms of area overhead, programming energy and non-idealities. A design-space exploration is, therefore, imperative to derive a hybrid-device architecture optimized for accurate DNN inference under the impact of non-idealities from multiple devices, while...

10.1109/jetcas.2023.3327748 article EN IEEE Journal on Emerging and Selected Topics in Circuits and Systems 2023-10-26

Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities, utilizing bio-inspired activation functions and sparse binary spike-data representations. While recent SNN algorithmic advances achieve high accuracy on large-scale computer vision tasks, their energy-efficiency claims rely on certain impractical estimation metrics. This work studies two hardware benchmarking platforms for SNN inference, namely SATA and SpikeSim. SATA is a sparsity-aware systolic-array...

10.1109/icassp48485.2024.10448269 article EN ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2024-03-18

The attention module in vision transformers (ViTs) performs intricate spatial correlations, contributing significantly to accuracy and delay. It is thereby important to modulate the number of attention layers according to the input feature complexity for optimal delay-accuracy tradeoffs. To this end, we propose PIVOT, a co-optimization framework which selectively skips attention based on input difficulty. For this, PIVOT employs a hardware-in-loop co-search to obtain attention skip configurations. Evaluations on the ZCU102 MPSoC FPGA show that PIVOT achieves...

10.1145/3649329.3655679 preprint EN 2024-06-23

Neural networks have achieved remarkable performance in computer vision; however, they are vulnerable to adversarial examples. Adversarial examples are inputs that have been carefully perturbed to fool classifier networks, while appearing unchanged to humans. Based on prior works on detecting adversaries, we propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork. We use Adversarial Noise Sensitivity (ANS), a novel metric for measuring the gradient...

10.1109/tcad.2021.3091436 article EN IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 2021-06-22
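
Because the abstract is cut off before the full ANS definition, the following is only a simplified gradient-based sensitivity proxy in the same spirit: compare per-layer parameter-gradient norms for a clean input and a perturbed one. The model, perturbation, and scoring here are assumptions for illustration, not the paper's exact metric.

```python
# Simplified gradient-norm comparison between clean and perturbed inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

def layer_grad_norms(model, x, y):
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()
    return [p.grad.norm().item() for p in model.parameters() if p.grad is not None]

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x_clean = torch.randn(4, 32)
x_adv = x_clean + 0.1 * torch.randn_like(x_clean)   # stand-in for an attack
y = torch.randint(0, 10, (4,))
clean_norms = layer_grad_norms(model, x_clean, y)
adv_norms = layer_grad_norms(model, x_adv, y)
# A larger relative change in gradient norms flags higher noise sensitivity.
```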

With a growing need to enable intelligence in embedded devices in the Internet of Things (IoT) era, secure hardware implementation of Deep Neural Networks (DNNs) has become imperative. We will focus on how to address adversarial robustness for DNNs through efficiency-driven optimizations. Since memory (specifically, dot-product operations) is a key energy-spending component for DNNs, approaches in the past have focused on optimizing the memory. One such approach is approximate digital CMOS memories with hybrid 6T-8T SRAM cells...

10.23919/date51398.2021.9474001 article EN Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015 2021-02-01

Recent years have seen a paradigm shift towards multi-task learning. This calls for memory- and energy-efficient solutions for inference in a multi-task scenario. We propose an algorithm-hardware co-design approach called MIME. MIME reuses the weight parameters of a trained parent task and learns task-specific thresholds on multiple child tasks. We find that this results in highly memory-efficient DRAM storage of neural-network parameters for multiple tasks compared to conventional multi-task inference. In addition, input-dependent dynamic neuronal pruning,...

10.1145/3489517.3530473 article EN Proceedings of the 59th ACM/IEEE Design Automation Conference 2022-07-10

The hardware-efficiency and accuracy of Deep Neural Networks (DNNs) implemented on In-memory Computing (IMC) architectures primarily depend on the DNN architecture and the peripheral circuit parameters. It is therefore essential to holistically co-search the network and peripheral parameters to achieve optimal performance. To this end, we propose XPert, which co-searches the network architecture in tandem with peripheral parameters such as the type and precision of the analog-to-digital converters, crossbar column sharing and the layer-specific input precision, using an optimization-based design space...

10.1109/dac56929.2023.10247676 article EN 2023-07-09

The gonadotropin-releasing hormone–gonadotropin-inhibitory hormone (GnRH–GnIH) system in the hypothalamus of mammals is a key factor that controls the entire reproductive system. The aim of this study was to immunolocalize GnIH (RFRP-3) during the estrous cycle and to examine the effect of putrescine on the expression of GnRH-I through both in vivo and in vitro (GT1-7 cells) approaches; the circulatory levels of GnRH-I, GnIH, and gonadotropins were also investigated. The study also aims at analyzing all immunofluorescence images by measuring the relative pixel count of an image...

10.1002/jez.2351 article EN Journal of Experimental Zoology Part A Ecological and Integrative Physiology 2020-02-10

Adversarial input detection has emerged as a prominent technique to harden Deep Neural Networks (DNNs) against adversarial attacks. Most prior works use neural network-based detectors or complex statistical analysis for adversarial detection. These approaches are computationally intensive and are themselves vulnerable to adversarial attacks. To this end, we propose DetectX, a hardware-friendly adversarial detection mechanism using hardware signatures like the Sum of column Currents (SoI) in memristive crossbars (XBar). We show that adversarial inputs have a higher SoI compared to clean inputs...

10.1109/tcsi.2021.3110487 article EN publisher-specific-oa IEEE Transactions on Circuits and Systems I Regular Papers 2021-09-16
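
A toy rendering of the SoI signature: each crossbar column current is the dot-product of the input voltages with that column's conductances, and the detector compares the sum of these currents for an incoming input against a calibrated threshold. The sizes, perturbation, and threshold below are hypothetical.

```python
# Sum-of-column-Currents (SoI) signature on a toy crossbar.
import torch

def sum_of_column_currents(G, v):
    column_currents = v @ G              # I_j = sum_i v_i * G_ij
    return column_currents.sum()

G = torch.rand(64, 32)                   # mapped (positive) conductances
v_clean = torch.rand(64)
v_adv = v_clean + 0.2 * torch.rand(64)   # stand-in for a perturbed input
threshold = 1.05 * sum_of_column_currents(G, v_clean)   # calibrated offline
is_adversarial = sum_of_column_currents(G, v_adv) > threshold
print(bool(is_adversarial))
```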

Interpolation is widely used in a number of applications. For instance, it is used to generate geometric models and path trajectories, and in various signal processing applications including data compression. Traditionally, software implementation of interpolation is carried out using a General Purpose Processor (GPP) or a Digital Signal Processor (DSP). Field Programmable Gate Arrays (FPGAs), however, offer a more customizable and faster solution. This paper presents a hardware accelerator architecture for Hermite interpolation on an FPGA. Cubic...

10.1109/dasip.2018.8596920 article EN 2018-10-01
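
For reference, the per-segment arithmetic that such an accelerator implements is the standard cubic Hermite basis evaluation, shown below in plain Python purely to document the math (the FPGA datapath itself is, of course, not Python).

```python
# Cubic Hermite interpolation on one segment using the standard basis.
def cubic_hermite(p0, p1, m0, m1, t):
    """Interpolate between p0 and p1 with endpoint tangents m0, m1; t in [0, 1]."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

print(cubic_hermite(0.0, 1.0, 0.0, 0.0, 0.5))   # 0.5 at the midpoint
```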