Youming Yang

ORCID: 0009-0009-0593-4842
Research Areas
  • Advanced Memory and Neural Computing
  • Ferroelectric and Negative Capacitance Devices
  • Neural Networks and Applications
  • Semiconductor Materials and Devices
  • Quantum Computing Algorithms and Architecture
  • Neural Networks and Reservoir Computing
  • Photoreceptor and Optogenetics Research
  • Quantum-Dot Cellular Automata
  • Advanced Algorithms and Applications
  • Fault Detection and Control Systems

Peking University
2023-2024

Combinatorial optimization (CO) is essential in various real-world decision-making and planning problems. However, most CO problems (COPs) are NP-hard, demanding substantial computational resources with conventional computers. The Ising machine is promising for addressing them efficiently thanks to its natural convergence behavior. Recently, CMOS-based Ising machines have been developed as low-cost COP solvers, which can be categorized into discrete-time (DT) [1–4] and continuous-time (CT) [5–7] ones. Benefiting...

10.1109/isscc49657.2024.10454272 article EN 2024 IEEE International Solid-State Circuits Conference (ISSCC) 2024-02-18
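
A minimal software sketch of the kind of discrete-time Ising dynamics this line of work builds on: a toy single-spin-flip annealer solving MAX-CUT mapped to an Ising Hamiltonian. The mapping, cooling schedule, and graph are hypothetical illustrations, not the hardware architecture described in the paper.

```python
# Toy discrete-time Ising solver for MAX-CUT (illustrative sketch only).
import numpy as np

def maxcut_to_ising(adjacency):
    """Map MAX-CUT to Ising couplings J = -A, so that cutting an edge lowers the energy."""
    return -np.asarray(adjacency, dtype=float)

def anneal(J, steps=2000, t_start=2.0, t_end=0.05, seed=0):
    """Single-spin-flip simulated annealing on H(s) = -1/2 * s^T J s."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1.0, 1.0], size=n)                  # random initial spin state
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)   # geometric cooling schedule
        i = rng.integers(n)
        dE = 2.0 * s[i] * (J[i] @ s)                     # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / t):
            s[i] = -s[i]                                 # accept the flip
    return s

# Toy instance: a 4-node ring graph, whose optimal cut crosses all 4 edges.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
spins = anneal(maxcut_to_ising(A))
cut = sum(A[i, j] for i in range(4) for j in range(4) if i < j and spins[i] != spins[j])
print("spins:", spins, "cut size:", cut)
```

The natural convergence mentioned in the abstract corresponds to the annealer settling into a low-energy spin configuration, which here is read out as a graph partition.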

Computing-in-memory (CIM) provides a highly efficient solution for neural networks in edge artificial intelligence applications. Most SRAM-based CIM designs can achieve high energy and area efficiency for a standard convolutional layer's multiply-and-accumulate (MAC) operations. However, when deploying depthwise separable convolution, they face several challenges. For these CIMs with a weight-stationary dataflow, the lower activation reuse increases redundant memory accesses, while the fewer parameters of MAC...

10.1109/tcsii.2024.3375319 article EN IEEE Transactions on Circuits and Systems II: Express Briefs 2024-03-14
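
As a rough illustration of why depthwise layers strain weight-stationary CIM, the sketch below counts parameters, MACs, and activation reuse for a standard 3x3 convolution versus its depthwise + pointwise replacement. The layer dimensions are hypothetical and not taken from the paper.

```python
# Back-of-the-envelope comparison of standard vs. depthwise-separable convolution.
H = W = 56                # output feature-map height/width (stride 1, "same" padding)
Cin, Cout, K = 64, 128, 3

def report(name, params, macs):
    acts = H * W * Cin    # input activations feeding the layer
    print(f"{name:10s} params={params:8d}  MACs={macs:12d}  MACs per activation={macs // acts}")

report("standard ", K * K * Cin * Cout, K * K * Cin * Cout * H * W)
report("depthwise", K * K * Cin,        K * K * Cin * H * W)
report("pointwise", Cin * Cout,         Cin * Cout * H * W)
```

Each input activation feeds K*K*Cout MACs in the standard layer but only K*K MACs in the depthwise layer, which is the low activation reuse the abstract refers to.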

Depthwise separable neural network models with fewer parameters, such as MobileNet, are more friendly to edge AI devices. They replace the standard convolution with depthwise separable convolution, which consists of a depthwise (DW) and a pointwise (PW) convolution. Most prior computing-in-memory (CIM) works [1–5] only optimize multiply-and-accumulate (MAC) operations for one of these two types. Thus, when performing depthwise separable convolution, recent SRAM-based CIMs still face limitations in energy efficiency, throughput, and memory utilization (Fig....

10.1109/cicc60959.2024.10529086 article EN 2024 IEEE Custom Integrated Circuits Conference (CICC) 2024-04-21
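
The NumPy sketch below shows the DW + PW decomposition itself: a per-channel 3x3 depthwise filter followed by a 1x1 pointwise channel-mixing step. Shapes and weights are made up for illustration and say nothing about the paper's CIM macro or dataflow.

```python
# Depthwise + pointwise convolution in plain NumPy (illustrative, valid padding).
import numpy as np

def depthwise_conv(x, w_dw):
    """x: (Cin, H, W), w_dw: (Cin, K, K) -> (Cin, H-K+1, W-K+1), one filter per channel."""
    Cin, H, W = x.shape
    _, K, _ = w_dw.shape
    out = np.zeros((Cin, H - K + 1, W - K + 1))
    for c in range(Cin):
        for i in range(H - K + 1):
            for j in range(W - K + 1):
                out[c, i, j] = np.sum(x[c, i:i+K, j:j+K] * w_dw[c])
    return out

def pointwise_conv(x, w_pw):
    """x: (Cin, H, W), w_pw: (Cout, Cin) -> (Cout, H, W), 1x1 channel mixing."""
    return np.tensordot(w_pw, x, axes=([1], [0]))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 10, 10))        # Cin = 8 input feature maps
w_dw = rng.standard_normal((8, 3, 3))       # one 3x3 filter per input channel
w_pw = rng.standard_normal((16, 8))         # Cout = 16 pointwise filters
y = pointwise_conv(depthwise_conv(x, w_dw), w_pw)
print(y.shape)                              # (16, 8, 8)
```

The DW stage touches each channel in isolation while the PW stage mixes channels, which is why a CIM design tuned for only one of the two leaves the other underutilized.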

Spiking neural network (SNN) is a promising generation of neural networks inspired by biology, which has the advantages of high energy efficiency and hardware friendliness. In an SNN, the input is not encoded as real-valued activations but as sequences of binary spikes. The weights are usually stored in the form of crossbars. Due to unstructured pruning compression during model training, SNN weights exhibit spatial sparsity. Recently, many sparse matrix algorithms have been applied to on-chip weight storage in accelerators. However, when non-zero values...

10.1109/biocas58349.2023.10388553 article EN 2023 IEEE Biomedical Circuits and Systems Conference (BioCAS) 2023-10-19
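
For readers unfamiliar with the setup, the sketch below rate-codes inputs into binary spike trains, compresses an unstructured-pruned weight matrix into CSR arrays (values, column indices, row pointers) as an on-chip weight memory might, and accumulates membrane potentials by adding the weights of neurons that fired. All sizes, the encoding, and the CSR layout are assumptions for illustration, not the paper's accelerator.

```python
# Rate-coded spikes plus CSR-compressed sparse SNN weights (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

# Rate coding: an input in [0, 1] fires with that probability at each timestep.
x = rng.random(16)                                   # 16 real-valued inputs
T = 8                                                # number of timesteps
spikes = (rng.random((T, 16)) < x).astype(np.uint8)  # (T, 16) binary spike trains

# Unstructured pruning leaves most weights at exactly zero (spatial sparsity).
w = rng.standard_normal((16, 32))                    # 16 pre -> 32 post neurons
w[rng.random(w.shape) < 0.8] = 0.0

# CSR compression: keep only non-zero values, their column indices, and row pointers.
values  = w[w != 0]
cols    = np.nonzero(w)[1]
row_ptr = np.concatenate(([0], np.cumsum(np.count_nonzero(w, axis=1))))
print(f"kept {values.size}/{w.size} weights")

# With binary spikes the MAC degenerates to accumulation: for every presynaptic
# neuron that fired at timestep t, add its stored non-zero weights to its targets.
v = np.zeros(32)
for t in range(T):
    for pre in np.nonzero(spikes[t])[0]:
        start, end = row_ptr[pre], row_ptr[pre + 1]
        v[cols[start:end]] += values[start:end]
print("membrane potential of neuron 0:", v[0])
```

The irregular row lengths visible in row_ptr are exactly what makes the placement of non-zero values a hardware problem, which is the issue the truncated sentence begins to describe.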