- Stochastic Gradient Optimization Techniques
- Low-power high-performance VLSI design
- Generative Adversarial Networks and Image Synthesis
- Neural Networks and Applications
- Quantum Computing Algorithms and Architecture
- Error Correcting Code Techniques
- Cellular Automata and Applications
- Evolutionary Algorithms and Applications
- Ferroelectric and Negative Capacitance Devices
- Parallel Computing and Optimization Techniques
- Domain Adaptation and Few-Shot Learning
- Machine Learning and Data Classification
- Advanced Memory and Neural Computing
- Computability, Logic, AI Algorithms
University of California, Santa Barbara
2023-2024
Abstract Extending Moore's law by augmenting complementary metal-oxide-semiconductor (CMOS) transistors with emerging nanotechnologies (X) has become increasingly important. One important class of problems involves sampling-based Monte Carlo algorithms used in probabilistic machine learning, optimization, and quantum simulation. Here, we combine stochastic magnetic tunnel junction (sMTJ)-based probabilistic bits (p-bits) with Field Programmable Gate Arrays (FPGAs) to create an energy-efficient CMOS + X (X = sMTJ)...
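The sMTJ-based p-bit summarized above is commonly modeled by a simple stochastic update rule. A minimal NumPy sketch of that rule (the 3-bit network, couplings, and function names here are illustrative, not taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def pbit_sweep(m, W, h, beta=1.0):
    """One asynchronous sweep over bipolar p-bits m_i in {-1, +1}.

    Each p-bit sees an input I_i = sum_j W_ij m_j + h_i and resolves to
    m_i = sgn(tanh(beta * I_i) - r), with r drawn uniformly from (-1, 1).
    In CMOS + sMTJ hardware the tanh response and the random draw come
    from the device physics; here both are emulated in software.
    """
    for i in rng.permutation(len(m)):
        I = W[i] @ m + h[i]
        m[i] = 1.0 if np.tanh(beta * I) > rng.uniform(-1.0, 1.0) else -1.0
    return m

# Tiny illustrative network: 3 symmetrically coupled p-bits.
W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.5],
              [-0.5, 0.5, 0.0]])
h = np.zeros(3)
m = rng.choice([-1.0, 1.0], size=3)
for _ in range(100):
    m = pbit_sweep(m, W, h)   # m is now a sample from the network's Boltzmann distribution
```

Repeated sweeps of this kind are exactly the Monte Carlo sampling step that the CMOS + sMTJ hardware accelerates.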
Feedforward networks form the backbone of deep learning, used in multilayer perceptrons, convolutional neural networks, and belief networks. Even though stochastic activations are highly desirable, they are often avoided due to their heavy computational cost on traditional hardware. This paper presents a hardware implementation of inference for such feedforward networks with the fastest nanodevice-based probabilistic bits (p-bits) demonstrated to date. The stochasticity of low-barrier magnetic tunnel junctions (sMTJs) is used to create...
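A stochastic activation of the kind this abstract refers to can be sketched as a Bernoulli draw on a sigmoid pre-activation; on conventional hardware every forward pass needs one random number per neuron, which is the cost that discourages their use, while an sMTJ-based p-bit produces the draw physically. A minimal sketch (layer sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binary_layer(x, W, b):
    """Binary stochastic activation: each neuron fires (outputs 1) with
    probability sigmoid(x @ W + b), otherwise outputs 0."""
    p = 1.0 / (1.0 + np.exp(-(x @ W + b)))       # firing probabilities
    return (rng.random(p.shape) < p).astype(np.float64)

# Illustrative 4 -> 3 layer on a batch of 2 inputs.
x = rng.random((2, 4))
W = rng.normal(size=(4, 3))
b = np.zeros(3)
out = stochastic_binary_layer(x, W, b)           # entries are 0.0 or 1.0
```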
Despite their appeal as physics-inspired, energy-based, and generative models, general Boltzmann Machines (BM) are considered intractable to train. This belief has led to simplified models of BMs with restricted intralayer connections, or to layer-by-layer training of deep BMs. Recent developments in domain-specific hardware -- specifically probabilistic computers (p-computers) built from probabilistic bits (p-bits) -- may change the established wisdom on this tractability. In this paper, we show that unrestricted BMs can be trained using...
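The tractability concern centers on the sampling step: training an unrestricted BM requires equilibrium samples from p(s) ∝ exp(-E(s)), typically obtained by Gibbs sampling, which is serial on conventional hardware but naturally parallelized by p-bits. A minimal sketch of one Gibbs sweep for a fully connected BM (the 5-unit network is illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def gibbs_sweep(s, W, b):
    """One Gibbs sweep over binary units s_i in {0, 1} of an unrestricted BM.

    With energy E(s) = -0.5 * s.W.s - b.s, each unit's conditional is
    p(s_i = 1 | rest) = sigmoid(W[i] @ s + b[i]).  Arbitrary (intralayer)
    connections are allowed: nothing here assumes a restricted topology.
    """
    for i in range(len(s)):
        p = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))
        s[i] = 1.0 if rng.random() < p else 0.0
    return s

n = 5
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2.0            # symmetric couplings
np.fill_diagonal(W, 0.0)       # no self-coupling
b = np.zeros(n)
s = rng.integers(0, 2, size=n).astype(float)
for _ in range(200):
    s = gibbs_sweep(s, W, b)   # s approaches a sample from the model
```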
The slowing down of Moore's law has driven the development of unconventional computing paradigms, such as specialized Ising machines tailored to solve combinatorial optimization problems. In this paper, we show a new application domain for probabilistic bit (p-bit)-based Ising machines by training deep generative AI models with them. Using sparse, asynchronous, and massively parallel updates, we train deep Boltzmann networks in a hybrid probabilistic-classical setup. We use the full MNIST and Fashion MNIST (FMNIST) datasets without any...
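The hybrid probabilistic-classical loop mentioned here can be summarized by the standard Boltzmann learning rule: the probabilistic hardware supplies correlation samples and the classical side applies Δw_ij = η(⟨s_i s_j⟩_data − ⟨s_i s_j⟩_model). A hedged sketch of the classical half, with placeholder sample arrays standing in for the hardware's output:

```python
import numpy as np

def boltzmann_weight_update(data_samples, model_samples, W, lr=0.01):
    """Classical half of one hybrid training step.

    data_samples / model_samples: (num_samples, n) arrays of binary states,
    in practice streamed from the p-bit sampler (clamped to data vs.
    free-running).  The update nudges model correlations toward data
    correlations: dW = lr * (<s s^T>_data - <s s^T>_model).
    """
    corr_data = data_samples.T @ data_samples / len(data_samples)
    corr_model = model_samples.T @ model_samples / len(model_samples)
    W += lr * (corr_data - corr_model)
    np.fill_diagonal(W, 0.0)       # keep no self-coupling
    return W

# Illustrative call with placeholder samples for a 3-unit network.
W = np.zeros((3, 3))
data = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
model = np.zeros((2, 3))
W = boltzmann_weight_update(data, model, W)
```

The expensive part, generating `model_samples`, is exactly what the sparse, massively parallel p-bit fabric provides.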
The transistor celebrated its 75${}^\text{th}$ birthday in 2022. The continued scaling of the transistor defined by Moore's Law continues, albeit at a slower pace. Meanwhile, the computing demands and energy consumption required by modern artificial intelligence (AI) algorithms have skyrocketed. As an alternative to transistors alone for general-purpose computing, their integration with unconventional technologies has emerged as a promising path for domain-specific computing. In this article, we provide a full-stack review...