- Advanced Memory and Neural Computing
- CCD and CMOS Imaging Sensors
- Machine Learning and ELM
- Advanced Neural Network Applications
- Radiation Detection and Scintillator Technologies
- Ferroelectric and Negative Capacitance Devices
- Machine Fault Diagnosis Techniques
- ECG Monitoring and Analysis
- Adversarial Robustness in Machine Learning
- Autonomous Vehicle Technology and Safety
- Optical Systems and Laser Technology
- Particle Detector Development and Performance
- High-Voltage Power Transmission Systems
- Advanced Data Compression Techniques
- Neural Dynamics and Brain Function
- Engineering Diagnostics and Reliability
- Neural Networks and Applications
- Mechanical Stress and Fatigue Analysis
- Infrared Target Detection Methodologies
- Stochastic Gradient Optimization Techniques
- Industrial Engineering and Technologies
- Anomaly Detection Techniques and Applications
- Geoscience and Mining Technology
- Advanced Measurement and Detection Methods
- Smart Grid and Power Systems
University of Notre Dame
2023-2025
Taiyuan University of Technology
2024
State Grid Corporation of China
2023
Wuhan National Laboratory for Optoelectronics
2020-2022
Huazhong University of Science and Technology
2020-2022
Xidian University
2022
This paper proposes an ultra-low-power, mixed-bit-width sparse convolutional neural network (CNN) accelerator for ventricular arrhythmia (VA) detection. The chip achieves 50% sparsity in a quantized 1D CNN using a sparse processing element (SPE) architecture. Measurements on a prototype fabricated in a TSMC 40 nm CMOS low-power (LP) process demonstrate that, on the VA classification task, it consumes 10.60 µW of power while achieving 150 GOPS of performance and 99.95% diagnostic accuracy. The computation density...
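A minimal software sketch of two ideas the abstract names: a small 1D CNN for beat classification pruned to ~50% weight sparsity. Layer sizes, kernel widths, and the pruning recipe are illustrative assumptions, not the paper's chip design.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class TinyECGNet(nn.Module):
    """Toy 1D CNN for single-lead beat classification (sizes are assumptions)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                       # x: (batch, 1, samples)
        return self.classifier(self.features(x).squeeze(-1))

net = TinyECGNet()
for m in net.modules():                         # ~50% weight sparsity, as in the abstract
    if isinstance(m, nn.Conv1d):
        prune.l1_unstructured(m, name="weight", amount=0.5)

print(net(torch.randn(4, 1, 256)).shape)        # torch.Size([4, 2])
```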
In-memory computing based on memristors is a promising solution to accelerate on-chip deep neural networks. Concerning the nonideal factors of device analog behaviors, a binary neural network (BNN) with ±1 weights and 0/+1 neurons is an alternative route to better employ memristors of high technical maturity. In this article, we demonstrate a select-column scheme for a BNN inference accelerator. By incorporating the performance of the W/AlOx/Al...
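As a quick illustration of the data format this route relies on (the mapping comments are assumptions about typical memristor BNN designs, not this article's circuit):

```python
import torch

def binarize_weights(w: torch.Tensor) -> torch.Tensor:
    # ±1 weights: each value could be stored as a differential memristor pair
    return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

def binarize_activations(a: torch.Tensor) -> torch.Tensor:
    # 0/+1 neurons: a neuron either drives its row or stays silent
    return (a > 0).float()

w, a = torch.randn(4, 4), torch.randn(4)
y = binarize_weights(w) @ binarize_activations(a)   # integer-valued dot products
print(y)
```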
Aiming at the difficulty of fault diagnosis caused by the difficulty of acquiring bearing data from the traction unit of a coal mining machine, a method combining ADAMS simulation and HHT feature extraction is proposed. First of all, taking the traction section as the research object, a virtual prototype is used to establish a healthy-state model and dynamics models based on inner-ring, rolling-body, and outer-ring faults; then, after EMD decomposition, a Hilbert transform is applied to each IMF component to obtain the joint time-frequency plane of the signal...
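A minimal sketch of the EMD-plus-Hilbert (HHT) step described above, assuming the third-party PyEMD package for the EMD stage (the paper's own tooling is not specified); the synthetic two-tone signal stands in for a vibration record:

```python
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

fs = 1000.0                                     # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

imfs = EMD().emd(x)                             # (n_imfs, n_samples)
for k, imf in enumerate(imfs):
    analytic = hilbert(imf)                     # analytic signal of this IMF
    amp = np.abs(analytic)                      # instantaneous amplitude
    freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    print(f"IMF {k}: mean inst. freq ~ {freq.mean():.1f} Hz, max amp {amp.max():.2f}")
```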
Compute-in-Memory (CiM), built upon non-volatile memory (NVM) devices, is promising for accelerating deep neural networks (DNNs) owing to its in-situ data processing capability and superior energy efficiency. To battle device variations, noise-injection training is commonly used, which perturbs the weights with Gaussian noise during training to make the model more robust to weight variations. Despite its prevalence, however, existing successes are mostly empirical, and very little theoretical support is available. Even most...
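A minimal sketch of noise-injection training as the abstract characterizes it (noise scale, model, and data are placeholders): each step evaluates the loss at Gaussian-perturbed weights and applies the resulting gradient to the clean weights.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sigma = 0.05                                    # assumed noise scale

for step in range(100):
    x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
    backup = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():                       # perturb: w <- w + N(0, sigma^2)
        for p in model.parameters():
            p.add_(sigma * torch.randn_like(p))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()                             # gradient at the perturbed point
    with torch.no_grad():                       # restore clean weights, then step
        for p, b in zip(model.parameters(), backup):
            p.copy_(b)
    opt.step()
```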
Recently, due to the development of big data and computer technology, artificial intelligence (AI) has received extensive attention and made great progress. Edge computing pushes the computing center of AI from the cloud to individual users, bringing AI closer to daily life, but at the same time it puts forward higher requirements for hardware realization, especially edge acceleration. Taking convolutional neural networks (CNNs) as an example, which show excellent problem-solving capabilities in different fields of academia and industry, it still...
It remains a challenge to run deep learning on devices with the stringent power budgets of the Internet-of-Things. This paper presents a low-power accelerator for processing neural networks on embedded devices. The power reduction is realized by avoiding multiplications of near-zero-valued data. An approximation scheme and a dedicated Near-Zero Approximation Unit (NZAU) are proposed to predict and skip such multiplications under certain thresholds. Compared with skipping only zero-valued computations, our design achieves 1.92X and 1.51X further reductions in total... LeNet-5...
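A toy numpy rendering of the skipping idea (the threshold and data are illustrative; in the paper this prediction is done in hardware by the NZAU):

```python
import numpy as np

def near_zero_dot(w, x, tau=0.05):
    keep = np.abs(x) >= tau                     # NZAU-style predicate
    return w[keep] @ x[keep]                    # only "useful" products computed

rng = np.random.default_rng(0)
w, x = rng.standard_normal(1024), rng.standard_normal(1024) * 0.1
exact, approx = w @ x, near_zero_dot(w, x)
skipped = np.mean(np.abs(x) < 0.05)
print(f"skipped {skipped:.0%} of multiplications, error {abs(exact - approx):.4f}")
```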
Edge Intelligence: In article number 2000114, Yi Li, Xiang-Shui Miao, and co-workers review the recent advances in memristive convolutional neural network (CNN) accelerators for the hardware realization of edge intelligence. Compression methods in combination with long short-term memory (LSTM) show great potential for specific domain applications. Insights into current challenges and an outlook on memristor-driven CNN edge intelligence are summarized.
Hydraulic support leg pressure serves as a crucial indicator for assessing work-face quality. Current quality-evaluation methods primarily concentrate on static analyses, like inadequate initial support force, overrun, and uneven bracket force, while neglecting dynamic column pressure changes. This paper introduces a model for hydraulic support evaluation using deep learning techniques. Real-time pressure data is preprocessed into spatio-temporal sub-matrix samples, which are then input into the model. The process assesses the type and characterizes its...
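A minimal sketch of the spatio-temporal preprocessing step (window length, stride, and support count are assumptions): pressure streams from adjacent supports are sliced into sub-matrix samples whose axes are space (support index) and time.

```python
import numpy as np

def to_submatrices(pressure, win=64, stride=32):
    """pressure: (n_supports, n_samples) -> (n_windows, n_supports, win)"""
    n_supports, n_samples = pressure.shape
    starts = range(0, n_samples - win + 1, stride)
    return np.stack([pressure[:, s:s + win] for s in starts])

stream = np.random.default_rng(1).standard_normal((8, 1000))  # 8 supports
samples = to_submatrices(stream)
print(samples.shape)  # (30, 8, 64) -> fed to the deep model
```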
Compute-in-memory (CIM) accelerators using non-volatile memory (NVM) devices offer promising solutions for energy-efficient and low-latency Deep Neural Network (DNN) inference execution. However, practical deployment is often hindered by the challenge of dealing with the massive number of model weight parameters impacted by the inherent device variations within non-volatile computing-in-memory (NVCIM) accelerators. This issue significantly offsets their advantages by increasing training overhead and the time needed for mapping...
Compute-in-memory (CIM) accelerators built upon non-volatile memory (NVM) devices excel in energy efficiency and latency when performing Deep Neural Network (DNN) inference, thanks to their in-situ data processing capability. However, the stochastic nature and intrinsic variations of NVM devices often result in performance degradation during DNN inference. Introducing these non-ideal device behaviors during training enhances robustness, but the drawbacks include limited accuracy improvement, reduced prediction...
Nonvolatile memory (NVM)-based convolutional neural networks (NvCNNs) have received widespread attention as a promising solution for hardware edge intelligence. However, there still exist many challenges under resource-constrained conditions, such as the limitations of precision and cost and, especially, the large overhead of analog-to-digital converters (ADCs). In this study, we systematically analyze the performance of NvCNNs under resource restrictions, with quantization of both weights and activations, and propose corresponding...
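A minimal sketch of the uniform weight/activation quantization being analyzed (bit-widths and clipping range are assumptions); in an NvCNN, lower activation precision directly reduces the resolution, and hence the overhead, that an ADC read-out must support:

```python
import numpy as np

def quantize(x, bits, x_max):
    """Symmetric uniform quantizer: snap x to a (2^(bits-1) - 1)-level grid."""
    levels = 2 ** (bits - 1) - 1
    q = np.round(np.clip(x, -x_max, x_max) / x_max * levels)
    return q / levels * x_max                   # dequantized value on the grid

w = np.random.default_rng(2).standard_normal(256)
for bits in (8, 4, 2):
    err = np.abs(w - quantize(w, bits, x_max=3.0)).mean()
    print(f"{bits}-bit: mean abs quantization error {err:.4f}")
```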
Compute-in-Memory (CiM), built upon non-volatile memory (NVM) devices, is promising for accelerating deep neural networks (DNNs) owing to its in-situ data processing capability and superior energy efficiency. Unfortunately, the well-trained model parameters, after being mapped to NVM devices, can often exhibit large deviations from their intended values due to device variations, resulting in notable performance degradation in these CiM-based DNN accelerators. There exists a long list of solutions to address this...
The main branch of a 500 kV high-voltage DC circuit breaker (HVDC CB) adopts a series structure of multiple fast mechanical switches, which is required to reach an effective insulation distance within 2 ms and to withstand the resulting over-voltage. Therefore, higher requirements on the dynamic voltage-balancing and synchronization characteristics of the switches are put forward. In this paper, the action process of the HVDC CB and the voltage stress on the switches in series are analyzed. A model is built, and the distributed capacitance parameters are extracted. Voltage-equalization measures for the multi-switch structure are studied. It...
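A worked two-switch example of why distributed capacitance unbalances the series stack (component values are illustrative, not from the paper): with equal off-state switch capacitances C and a stray capacitance C_s from the mid node to ground, charge balance at the mid node gives the transient voltage across the upper switch as

```latex
V_{\mathrm{top}} \;=\; V\,\frac{C + C_s}{2C + C_s},
\qquad
\left.\frac{V_{\mathrm{top}}}{V}\right|_{C = 100\,\mathrm{pF},\; C_s = 40\,\mathrm{pF}}
\;=\; \frac{140}{240} \;\approx\; 0.583 .
```

So the upper switch sees about 58% of the total voltage instead of 50%. A grading capacitor C_p >> C_s in parallel with each switch changes the ratio to (C_p + C + C_s) / (2(C_p + C) + C_s), pushing it back toward 1/2, which is the kind of equalization measure the abstract refers to.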