- Advanced Memory and Neural Computing
- Ferroelectric and Negative Capacitance Devices
- Neural Networks and Applications
- Semiconductor Materials and Devices
- Quantum Computing Algorithms and Architecture
- Neural Networks and Reservoir Computing
- Photoreceptor and Optogenetics Research
- Quantum-Dot Cellular Automata
- Advanced Algorithms and Applications
- Fault Detection and Control Systems
Peking University
2023-2024
Combinatorial optimization (CO) is essential in various real-world decision-making and planning problems. However, most CO problems (COPs) are NP-hard, demanding substantial computational resources on conventional computers. The Ising machine is promising for addressing them efficiently thanks to its natural convergence behavior. Recently, CMOS-based Ising machines have been developed as low-cost COP solvers, which can be categorized into discrete-time (DT) [1–4] and continuous-time (CT) [5–7] ones. Benefiting...
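As a rough illustration of the convergence behavior an Ising machine exploits, the sketch below maps a small MAX-CUT instance onto an Ising model and anneals it toward a low-energy spin state in software. The graph, coupling matrix, and cooling schedule are illustrative assumptions, not the referenced DT/CT hardware.

```python
# Minimal sketch: MAX-CUT mapped onto an Ising model, relaxed by simulated
# annealing. All sizes and schedules below are made-up illustrative values.
import numpy as np

rng = np.random.default_rng(0)

# Coupling matrix J: J[i, j] = -1 on each edge (antiferromagnetic), so a cut
# edge (anti-aligned spins) lowers the energy E = -0.5 * s^T J s.
n = 8
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (4, 5), (5, 6), (6, 7), (7, 4)]
J = np.zeros((n, n))
for i, j in edges:
    J[i, j] = J[j, i] = -1.0

spins = rng.choice([-1, 1], size=n)          # random initial spin configuration

def energy(s):
    return -0.5 * s @ J @ s                  # Ising energy, no external field

# Anneal: accept single-spin flips under a slowly cooled temperature, emulating
# the gradual settling of an Ising machine toward a low-energy (good) solution.
T = 2.0
for step in range(2000):
    i = rng.integers(n)
    dE = 2.0 * spins[i] * (J[i] @ spins)     # energy change if spin i flips
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i] *= -1
    T *= 0.999                               # cooling schedule

cut = sum(1 for i, j in edges if spins[i] != spins[j])
print("final energy:", energy(spins), "| cut edges:", cut)
```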
Computing-in-memory (CIM) provides a highly efficient solution for neural networks in edge artificial intelligence applications. Most SRAM-based CIM designs can achieve high energy and area efficiency for the standard convolutional layer's multiply-and-accumulate (MAC) operations. However, when deploying depthwise separable convolutions, they face several challenges. For weight-stationary CIMs, the lower activation reuse increases redundant memory accesses, and the fewer parameters of the MAC...
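To make the weight-stationary reuse argument concrete, the following sketch contrasts how many outputs a single activation fetch can drive in a standard convolution versus a depthwise convolution on a crossbar-style array. The array sizes and channel counts are assumptions chosen only for illustration.

```python
# Minimal sketch of the weight-stationary MAC pattern: weights stay fixed in
# the array, an activation vector is broadcast, each column accumulates one
# output. Sizes (C_in, K, C_out) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def crossbar_mac(weights, activations):
    """Weight-stationary MAC: `weights` (rows x cols) stays resident,
    `activations` (rows,) is broadcast, and each column sums its products."""
    return activations @ weights

# Standard convolution at one output pixel: the SAME activation vector
# (C_in * K * K values) is reused across all C_out columns -> high reuse.
C_in, K, C_out = 16, 3, 32
act = rng.standard_normal(C_in * K * K)
W_std = rng.standard_normal((C_in * K * K, C_out))
out_std = crossbar_mac(W_std, act)            # C_out outputs per activation fetch

# Depthwise convolution: each input channel only meets its own K*K filter, so
# a fetched activation slice drives a single column -> low reuse, and columns
# holding other channels' weights sit idle during that fetch.
W_dw = rng.standard_normal((K * K, C_in))     # one K*K filter per channel
out_dw = np.array([crossbar_mac(W_dw[:, [c]],
                                act.reshape(C_in, K * K)[c])[0]
                   for c in range(C_in)])

print("standard conv outputs per activation fetch:", out_std.shape[0])
print("depthwise conv outputs per activation fetch:", 1)
```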
Depthwise separable neural network models with fewer parameters, such as MobileNet, are more friendly to edge AI devices. They replace the standard convolution with a depthwise separable convolution, which consists of a depthwise (DW) and a pointwise (PW) convolution. Most prior computing-in-memory (CIM) works [1–5] only optimize multiply-and-accumulate (MAC) operations for one of these two types. Thus, when performing both, recent SRAM-based CIMs still face limitations in energy efficiency, throughput, and memory utilization (Fig....
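A back-of-the-envelope comparison, assuming an illustrative MobileNet-style layer size, shows why the DW + PW decomposition carries far fewer parameters and MACs than a standard convolution:

```python
# Minimal sketch: parameter and MAC counts of a standard convolution versus a
# depthwise separable (DW + PW) one. Layer sizes are illustrative assumptions.
def conv_costs(c_in, c_out, k, h, w):
    """Standard k x k convolution over an h x w output map."""
    params = c_in * c_out * k * k
    macs = params * h * w
    return params, macs

def dw_separable_costs(c_in, c_out, k, h, w):
    """Depthwise (one k x k filter per input channel) + pointwise (1 x 1)."""
    dw_params = c_in * k * k
    pw_params = c_in * c_out
    params = dw_params + pw_params
    macs = params * h * w
    return params, macs

std = conv_costs(c_in=64, c_out=128, k=3, h=56, w=56)
sep = dw_separable_costs(c_in=64, c_out=128, k=3, h=56, w=56)
print(f"standard:  {std[0]:>8} params, {std[1]:>12} MACs")
print(f"separable: {sep[0]:>8} params, {sep[1]:>12} MACs")
print(f"reduction: ~{std[1] / sep[1]:.1f}x fewer MACs")
```

For these assumed sizes the separable form needs roughly 8x fewer MACs, which is the saving that makes such models attractive at the edge even though it splits the workload into two differently shaped operations.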
The spiking neural network (SNN) is a promising generation of biologically inspired networks, with the advantages of high energy efficiency and hardware friendliness. In an SNN, the input is not encoded as real-valued activations but as sequences of binary spikes. The weights are usually stored in the form of crossbars. Due to unstructured pruning compression during model training, SNNs exhibit spatial sparsity. Recently, many matrix compression algorithms have been applied to on-chip weight storage in accelerators. However, when non-zero values...
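The sketch below illustrates, under assumed sizes and a simple Bernoulli rate-coding scheme, the two properties mentioned here: inputs as sequences of binary spikes, and pruned (spatially sparse) weights kept in a compressed format so only non-zero values are stored and accumulated. It is a software illustration, not the referenced accelerator design.

```python
# Minimal sketch: rate-coded binary spikes driving an integrate-and-fire layer
# whose pruned weights are stored in compressed sparse row (CSR) form.
# Sizes, sparsity, and the threshold are illustrative assumptions.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(2)

# Rate-code a real-valued input into T binary spike frames.
T, n_in, n_out = 8, 64, 32
x = rng.random(n_in)                                   # normalized input in [0, 1]
spikes = (rng.random((T, n_in)) < x).astype(np.int8)   # T x n_in binary spikes

# Unstructured pruning leaves ~90% zeros; keep only non-zeros in CSR form,
# mimicking compressed on-chip weight storage.
W = rng.standard_normal((n_out, n_in))
W[rng.random(W.shape) < 0.9] = 0.0
W_sparse = csr_matrix(W)

# Integrate-and-fire over time: each spike frame selects weight columns, the
# membrane potential accumulates, and an output spike fires at the threshold.
v = np.zeros(n_out)
theta = 1.0                                            # firing threshold (assumed)
out_spikes = np.zeros((T, n_out), dtype=np.int8)
for t in range(T):
    v += W_sparse @ spikes[t]                          # accumulate only stored non-zeros
    out_spikes[t] = v >= theta
    v[out_spikes[t] == 1] -= theta                     # reset by subtraction

print("stored non-zeros:", W_sparse.nnz, "of", W.size)
print("output spike counts:", out_spikes.sum(axis=0))
```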