Hung-Hsi Hsu

ORCID: 0009-0007-3511-5333
Research Areas
  • Advanced Memory and Neural Computing
  • Ferroelectric and Negative Capacitance Devices
  • Semiconductor Materials and Devices
  • Neuroscience and Neural Engineering
  • CCD and CMOS Imaging Sensors
  • Video Coding and Compression Technologies
  • Advanced Data Compression Techniques
  • Image and Video Quality Assessment

National Tsing Hua University
2023-2025

Taiwan Semiconductor Manufacturing Company (China)
2025

Taiwan Semiconductor Manufacturing Company (Taiwan)
2024

MediaTek (China)
2015

Artificial Intelligence (AI) is currently experiencing a bloom driven by deep learning (DL) techniques, which rely on networks of connected simple computing units operating in parallel. The low communication bandwidth between memory and processing units in conventional von Neumann machines does not support the requirements of emerging applications that rely extensively on large sets of data. More recent computing paradigms, such as high parallelization and near-memory computing, help alleviate the data communication bottleneck to some...

10.1038/s41467-024-45670-9 article EN cc-by Nature Communications 2024-03-04
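The work above centers on in-memory computing with resistive devices, where the multiply-and-accumulate happens inside the memory array itself. As a rough illustration only (not taken from the paper), the sketch below models a small memristive crossbar in which signed weights are encoded as a differential pair of conductances and the dot product emerges as output currents, so no weight data travels to a separate processor; the array size, encoding, and values are assumptions made for the example.

```python
# A minimal sketch (not from the paper) of the idea behind analog in-memory
# computing: weights are stored as memristor conductances G, inputs are applied
# as voltages V, and Ohm's/Kirchhoff's laws yield the dot product as output
# currents I = G @ V, so no weight data moves between memory and processor.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 crossbar: 4 output rows (bit lines), 3 input columns (word lines).
weights = rng.uniform(-1.0, 1.0, size=(4, 3))

# Map signed weights onto two non-negative conductance arrays (G+ and G-),
# a common differential encoding for memristive crossbars.
g_pos = np.clip(weights, 0.0, None)
g_neg = np.clip(-weights, 0.0, None)

inputs = rng.uniform(0.0, 1.0, size=3)           # input activations as voltages

i_pos = g_pos @ inputs                           # column currents summed per row
i_neg = g_neg @ inputs
mac_result = i_pos - i_neg                       # differential readout

print(np.allclose(mac_result, weights @ inputs)) # True: same multiply-accumulate
```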

Artificial intelligence (AI) edge devices prefer employing high-capacity nonvolatile compute-in-memory (CIM) to achieve high energy efficiency and rapid wakeup-to-response with sufficient accuracy. Most previous works are based on either memristor-based CIMs, which suffer from accuracy loss and do not support training as a result of limited endurance, or digital static random-access memory (SRAM)-based CIMs, which suffer from large area requirements and volatile storage. We report an AI processor that uses memristor-SRAM...

10.1126/science.adf5538 article EN Science 2024-04-18

AI-edge devices demand high-precision computation (e.g., FP16 and BF16) for accurate inference in practical applications, while maintaining high energy efficiency (EF) and low standby power to prolong battery life. Thus, advanced AI-edge processors [1, 2] require non-volatile compute-in-memory (nvCIM) [3–5] with a large on-chip memory to store all of the neural network's parameters (weight data) during power-off, together with high-EF multiply-and-accumulate (MAC) operations to maximize compute efficiency. Among nvCIMs, ReRAM-nvCIM...

10.1109/isscc49657.2024.10454468 article EN 2024 IEEE International Solid-State Circuits Conference (ISSCC) 2024-02-18
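Since the entry above hinges on FP16/BF16-precision MAC operations, a minimal sketch (my own illustration, not the macro described in the paper) may help show why BF16 is attractive for such hardware: it keeps FP32's 8-bit exponent but only 7 mantissa bits, so each product loses precision while the dynamic range is preserved. The bit-masking emulation and the test vectors below are assumptions for the example.

```python
# A minimal sketch of a BF16-input multiply-and-accumulate compared against FP32.
# BF16 is emulated here by truncating FP32 values to their upper 16 bits.
import numpy as np

def to_bf16(x):
    """Emulate BF16 by truncating an FP32 array to its upper 16 bits (round toward zero)."""
    u = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (u & np.uint32(0xFFFF0000)).view(np.float32)

rng = np.random.default_rng(1)
w = rng.normal(size=1024).astype(np.float32)    # hypothetical weights
a = rng.normal(size=1024).astype(np.float32)    # hypothetical activations

exact = np.dot(w, a)                             # FP32 multiply-and-accumulate
bf16 = np.dot(to_bf16(w), to_bf16(a))            # inputs truncated to BF16

print(f"FP32 MAC: {exact:.6f}  BF16-input MAC: {bf16:.6f}  "
      f"rel. error: {abs(exact - bf16) / abs(exact):.2e}")
```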

Tiny AI edge processors prefer using nvCIM to achieve low standby power, high energy efficiency (EF), and short wakeup-to-response latency (T_WR). Most nvCIMs use in-memory computing for MAC operations; however, this imposes a tradeoff between EF and accuracy, due to the accumulation number (N_ACU) versus the signal margin and readout quantization. To address this, we...

10.23919/vlsitechnologyandcir57934.2023.10185326 article EN 2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits) 2023-06-11
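The EF-versus-accuracy tradeoff tied to the accumulation number (N_ACU) mentioned above can be made concrete with a small numerical sketch. It assumes (my assumptions, not the paper's circuit) that a long dot product is computed in groups of N_ACU inputs and that each group's analog partial sum is digitized by a fixed-resolution readout whose full scale must cover the whole group: a larger N_ACU means fewer readouts (better EF) but a coarser LSB relative to the signal margin (worse accuracy).

```python
# A toy model of grouped in-memory MAC with B-bit readout quantization:
# larger N_ACU -> fewer readouts, but each readout's LSB spans a wider range.
import numpy as np

def grouped_mac(w, x, n_acu, bits):
    """Dot product with per-group quantized readout; returns (value, #readouts)."""
    levels = 2 ** bits
    full_scale = float(n_acu)                    # partial sum of n_acu products in [-1, 1]
    lsb = 2 * full_scale / (levels - 1)
    total, readouts = 0.0, 0
    for start in range(0, len(w), n_acu):
        partial = np.dot(w[start:start + n_acu], x[start:start + n_acu])
        total += np.round(partial / lsb) * lsb   # B-bit readout quantization
        readouts += 1
    return total, readouts

rng = np.random.default_rng(2)
w = rng.uniform(-1, 1, 512)
x = rng.uniform(-1, 1, 512)
exact = np.dot(w, x)

for n_acu in (8, 32, 128):
    approx, n_reads = grouped_mac(w, x, n_acu, bits=6)
    print(f"N_ACU={n_acu:4d}  readouts={n_reads:3d}  error={abs(approx - exact):.3f}")
```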

On-chip non-volatile compute-in-memory (nvCIM) enables artificial intelligence (AI)-edge processors to perform multiply-and-accumulate (MAC) operations while retaining weight data in power-off mode to enhance energy efficiency. However, the design challenges of nvCIM-based AI-edge processors include: 1) the lack of an nvCIM-friendly computing flow; 2) the tradeoff between the usage of memory devices versus process variations, yield, and area overhead; 3) long latency and low efficiency; 4) small signal margin and large...

10.1109/jssc.2023.3314433 article EN IEEE Journal of Solid-State Circuits 2023-10-02

A 4K×2K H.265/HEVC video codec chip is fabricated in a 28nm CMOS process with a core area of 2.16 mm². This LSI integrates dual-standard (H.265 and H.264) codecs and a series of prevalent (VC-1, WMV-7/8/9, VP-6/8, AVS, RM-8/9/10, MPEG-2/4) decoders into a single chip. It contains 3,558K logic gates and 308KB of internal SRAM. Moreover, it simplifies the intra/inter rate-distortion optimization (RDO) processes and reduces...

10.1109/isscc.2015.7063063 article EN 2015-02-01
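Because the abstract above highlights simplified intra/inter rate-distortion optimization (RDO), a minimal sketch of the underlying decision rule may help: each candidate coding mode is scored with the Lagrangian cost J = D + λ·R, and the cheapest mode wins; simplifying this search is what reduces encoding complexity. The mode names, distortion/rate numbers, and λ below are hypothetical and are not the chip's hardware flow.

```python
# A minimal sketch of RDO mode decision: pick the mode minimizing J = D + lambda * R.
def rdo_select(candidates, lam):
    """candidates: list of (mode_name, distortion_SSD, rate_bits); returns best mode."""
    best_mode, best_cost = None, float("inf")
    for mode, distortion, rate in candidates:
        cost = distortion + lam * rate           # Lagrangian rate-distortion cost
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost

# Hypothetical numbers for one coding unit: intra vs. two inter candidates.
modes = [("intra_angular", 1800.0, 96), ("inter_merge", 950.0, 24), ("inter_amvp", 700.0, 60)]
print(rdo_select(modes, lam=12.0))               # -> ('inter_merge', 1238.0)
```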

Artificial intelligence (AI) edge devices are highly susceptible to memory wall issues imposed by the von Neumann computing architecture. Non-volatile on-chip computing-in-memory (nvCIM) is a viable approach to reducing the energy consumption and latency caused by the movement of intermediate data between the memory and processing units in AI edge devices. This paper presents an overview of nvCIM macro structures, including in-memory computing (IMC) and near-memory computing (NMC), as well as the challenges in designing these devices. It also assesses the performance of recent silicon-verified...

10.1109/mwscas57524.2023.10405877 article EN 2023 IEEE 66th International Midwest Symposium on Circuits and Systems (MWSCAS) 2023-08-06