Hyunmin Jeong

ORCID: 0000-0001-7824-0993
Research Areas
  • Parallel Computing and Optimization Techniques
  • Embedded Systems Design Techniques
  • Advanced Neural Network Applications
  • Low-power high-performance VLSI design
  • Advanced Memory and Neural Computing
  • Tensor decomposition and applications
  • Neural Networks and Applications
  • Computational Physics and Python Applications
  • CCD and CMOS Imaging Sensors
  • Quantum Mechanics and Applications
  • Ferroelectric and Negative Capacitance Devices
  • Quantum Information and Cryptography
  • Cloud Computing and Resource Management
  • Domain Adaptation and Few-Shot Learning
  • VLSI and Analog Circuit Testing
  • VLSI and FPGA Design Techniques
  • Advanced Image and Video Retrieval Techniques
  • Cold Atom Physics and Bose-Einstein Condensates
  • Distributed and Parallel Computing Systems

University of Illinois Urbana-Champaign
2021-2023

Kyung Hee University
2013

Yonsei University
2013

High-level synthesis (HLS) has been widely adopted as it significantly improves hardware design productivity and enables efficient design space exploration (DSE). Existing HLS tools are built using compiler infrastructures largely based on a single level of abstraction, such as LLVM. However, designs typically come with intrinsic structural or functional hierarchies, and different optimization problems are often better solved at different levels of abstraction. This paper proposes ScaleHLS, a new scalable...

10.1109/hpca53966.2022.00060 preprint EN 2022-04-01

This paper presents an enhanced version of a scalable HLS (High-Level Synthesis) framework named ScaleHLS, which can compile C/C++ programs and PyTorch models to highly-efficient synthesizable C++ designs. The original ScaleHLS achieved significant speedup on both computation kernels and neural network models [14]. In this paper, we first highlight the key features of ScaleHLS that tackle the challenges present in the representation, optimization, and exploration of large-scale designs. To further improve scalability, we then propose a transform and analysis library supported...

10.1145/3489517.3530631 article EN Proceedings of the 59th ACM/IEEE Design Automation Conference 2022-07-10

The exploding complexity and computation efficiency requirements of applications are stimulating a strong demand for hardware acceleration with heterogeneous platforms such as FPGAs. However, a high-quality FPGA design is very hard to create, as it requires hardware expertise and a long iteration time. In contrast, software is typically developed in a short development cycle, in high-level languages like Python, which sit at a much higher level of abstraction than all existing hardware design flows. To close this gap and simplify FPGA programming, we...

10.1109/tc.2021.3123465 article EN cc-by IEEE Transactions on Computers 2021-01-01

High-Level Synthesis (HLS) has enabled users to rapidly develop designs targeted for FPGAs from the behavioral description of the design. However, to synthesize an optimal design capable of taking better advantage of the target FPGA, a considerable amount of effort is needed to transform the initial description into a form that can capture the desired level of parallelism. Thus, a design space exploration (DSE) engine capable of optimizing large and complex designs is needed to achieve this goal. We present a new DSE engine considering code transformations, compiler directives (pragmas), and...
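The flavor of pragma-based DSE can be sketched as a toy exhaustive search. Note this uses a hypothetical analytical cost model invented for illustration, not the engine or models described in the paper: candidate unroll and pipeline settings are enumerated, infeasible points are pruned against a resource budget, and the lowest-latency feasible point is kept.

```python
import itertools

def toy_latency(trip_count, unroll, pipeline):
    # Hypothetical model: unrolling divides the iteration count;
    # pipelining overlaps iterations at an assumed II of 1 plus a
    # fixed fill/drain overhead of 10 cycles.
    iters = -(-trip_count // unroll)  # ceiling division
    return iters + 10 if pipeline else iters * 10

def toy_resources(unroll, pipeline):
    # Hypothetical model: each unrolled copy costs 50 resource units;
    # pipelining control adds 20.
    return unroll * 50 + (20 if pipeline else 0)

def explore(trip_count=128, budget=300):
    """Exhaustively search the pragma space; return (latency, unroll, pipeline)."""
    best = None
    for unroll, pipeline in itertools.product([1, 2, 4, 8], [False, True]):
        if toy_resources(unroll, pipeline) > budget:
            continue  # prune points that exceed the resource budget
        lat = toy_latency(trip_count, unroll, pipeline)
        if best is None or lat < best[0]:
            best = (lat, unroll, pipeline)
    return best
```

Real DSE engines replace the toy cost model with analytical or learned estimates and prune the (much larger) space of combined code transformations and pragmas rather than enumerating it exhaustively.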

10.1145/3572959 article EN ACM Transactions on Reconfigurable Technology and Systems 2023-02-15

The exploding complexity and computation efficiency requirements of applications are stimulating a strong demand for hardware acceleration with heterogeneous platforms such as FPGAs. However, a high-quality FPGA design is very hard to create, as it requires hardware expertise and a long iteration time. In contrast, software is typically developed in a short development cycle, in high-level languages like Python, which sit at a much higher level of abstraction than all existing hardware design flows. To close this gap between software and hardware flows for these applications,...

10.1145/3431920.3439478 article EN 2021-02-17

High-level synthesis (HLS) has been widely adopted as it significantly improves hardware design productivity and enables efficient design space exploration (DSE). Existing HLS tools are built using compiler infrastructures largely based on a single level of abstraction, such as LLVM. However, designs typically come with intrinsic structural or functional hierarchies, and different optimization problems are often better solved at different levels of abstraction. This paper proposes ScaleHLS, a new scalable and customizable...
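The multi-level idea can be illustrated with a toy sketch in plain Python (not ScaleHLS or MLIR; the loop representation here is invented for illustration): a transform like unrolling is trivial while the loop is still a structured node, whereas after lowering to flat statements the loop structure would have to be reconstructed before the same optimization could apply.

```python
def unroll(loop, factor):
    # High-level transform: operates directly on the structured loop node.
    trip, body = loop["trip_count"], loop["body"]
    assert trip % factor == 0, "toy sketch assumes evenly divisible trip count"
    return {"trip_count": trip // factor,
            "body": [stmt.replace("i", f"(i*{factor}+{k})")
                     for k in range(factor) for stmt in body]}

def lower(loop):
    # Lowering: flatten the structured loop into a plain statement list,
    # discarding the loop structure that unroll() relied on.
    return [stmt.replace("i", str(i))
            for i in range(loop["trip_count"])
            for stmt in loop["body"]]

loop = {"trip_count": 4, "body": ["y[i] += x[i]"]}
unrolled = unroll(loop, 2)  # still a loop node: 2 iterations, 2 statements each
```

Applying `unroll` before `lower` is easy; after `lower`, the "loop" is gone and the optimization opportunity with it, which is the motivation for optimizing at multiple levels of abstraction.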

10.48550/arxiv.2107.11673 preprint EN cc-by arXiv (Cornell University) 2021-01-01

Compression technologies for deep neural networks (DNNs), such as weight quantization, have been widely investigated to reduce the model size so that DNNs can be implemented on hardware with strict resource restrictions. However, one major downside of compression is accuracy degradation. To deal with this problem effectively, we propose a new compressed network inference scheme in which a high-accuracy but slower DNN is coupled with its highly compressed version, which typically delivers much faster speed but lower accuracy. During inference,...
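The coupled big/little idea can be sketched as a confidence-gated cascade. This is a minimal illustration under assumed threshold semantics, not the paper's exact scheme: the compressed model runs first, and the accurate model is invoked only when the compressed model is unsure.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cascade_infer(x, little_model, big_model, threshold=0.9):
    """Run the fast compressed model first; fall back to the slow,
    accurate model only when the top-1 confidence is below threshold."""
    probs = softmax(little_model(x))
    if max(probs) >= threshold:
        return probs.index(max(probs))  # confident: accept the cheap answer
    big_out = big_model(x)              # unsure: pay for the accurate model
    return big_out.index(max(big_out))
```

Average latency then approaches the little model's speed when most inputs are easy, while hard inputs still get the big model's accuracy.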

10.1109/asap52443.2021.00027 article EN 2021-07-01

10.1007/s10773-013-1869-8 article EN International Journal of Theoretical Physics 2013-10-25

Machine learning is one of the most popular fields in the current era. It is used in various areas, such as speech recognition, face recognition, medical diagnosis, etc. However, the problem is that neural networks for machine learning applications are becoming too large and slow as they get more complicated and powerful. This gets especially serious on edge devices with small chips. As a result, researchers have proposed two major solutions to this problem.

10.1109/fccm51124.2021.00061 article EN 2021-05-01