Kaijie Tu

ORCID: 0000-0002-8112-0906
Research Areas
  • Advanced Neural Network Applications
  • Advanced Memory and Neural Computing
  • Advanced Vision and Imaging
  • Machine Learning and ELM
  • Ferroelectric and Negative Capacitance Devices
  • Image and Signal Denoising Methods
  • Parallel Computing and Optimization Techniques
  • Advanced Image and Video Retrieval Techniques
  • Generative Adversarial Networks and Image Synthesis
  • Image Enhancement Techniques
  • Digital Media Forensic Detection
  • Advanced Image Processing Techniques
  • Image Processing Techniques and Applications

Institute of Computing Technology
2018-2022

University of Chinese Academy of Sciences
2022

Chinese Academy of Sciences
2018-2021

Unlike standard Convolutional Neural Networks (CNNs) with fully-connected layers, Fully Convolutional Networks (FCNs) are prevalent in computer vision applications such as object detection, semantic/image segmentation, and the most popular generative tasks based on Generative Adversarial Networks (GANs). In an FCN, traditional convolutional layers and deconvolutional layers contribute to the majority of the computation complexity. However, prior deep learning accelerator designs mostly focus on CNN optimization. They either use independent...

10.1145/3240765.3240810 article EN 2018-11-05
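The abstract above centers on deconvolution (transposed convolution) layers in FCNs. As a minimal illustration of why these layers are computationally awkward for conventional accelerators, the sketch below expresses a strided single-channel deconvolution as zero-insertion followed by an ordinary convolution; this is a generic textbook formulation, not the accelerator design proposed in the paper.

```python
import numpy as np

def deconv2d(x, w, stride=2):
    """Transposed convolution via zero-insertion + ordinary convolution.
    x: (H, W) input feature map; w: (K, K) kernel (single channel)."""
    H, W = x.shape
    K = w.shape[0]
    # Insert (stride - 1) zeros between neighboring input elements;
    # most of the resulting multiplications are by these zeros, which
    # is the inefficiency deconvolution accelerators try to avoid.
    up = np.zeros((H * stride - (stride - 1), W * stride - (stride - 1)))
    up[::stride, ::stride] = x
    # "Full" convolution: pad by K-1 and slide the flipped kernel.
    up = np.pad(up, K - 1)
    Ho, Wo = up.shape[0] - K + 1, up.shape[1] - K + 1
    out = np.zeros((Ho, Wo))
    for i in range(Ho):
        for j in range(Wo):
            out[i, j] = np.sum(up[i:i + K, j:j + K] * w[::-1, ::-1])
    return out
```

For a 3x3 input, 3x3 kernel, and stride 2 this yields the expected (H-1)*stride + K = 7x7 output.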

Bit-serial architectures (BSAs) are becoming increasingly popular in low-power neural network processor (NNP) design. However, the performance and efficiency of state-of-the-art BSA NNPs heavily depend on the distribution of ineffectual weight-bits in the running network. To boost third-party BSA accelerators, this work presents Bit-Pruner, a software approach to learn BSA-favored networks without resorting to hardware modifications. The proposed techniques not only progressively prune but also structure...

10.1109/dac18072.2020.9218534 article EN 2020-07-01
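The Bit-Pruner abstract above hinges on the observation that a zero-bit-skipping bit-serial unit spends cycles only on set weight bits, so removing ineffectual bits directly shortens latency. The sketch below illustrates that cost model with a simple keep-top-bits heuristic; the names and the heuristic are assumptions for illustration, not the learning procedure the paper proposes.

```python
def popcount_cycles(weights, bits=8):
    """Assumed cost model: a zero-skipping bit-serial MAC spends one
    cycle per set weight bit, so latency ~ total effectual-bit count."""
    return sum(bin(w & ((1 << bits) - 1)).count("1") for w in weights)

def prune_weight_bits(w, keep=2, bits=8):
    """Keep only the `keep` most-significant set bits of w (a toy
    bit-pruning rule; Bit-Pruner instead learns which bits to drop
    during training to preserve accuracy)."""
    kept, out = 0, 0
    for i in reversed(range(bits)):
        if w >> i & 1:
            out |= 1 << i
            kept += 1
            if kept == keep:
                break
    return out
```

For example, pruning 0b01101101 down to its two most-significant set bits gives 0b01100000, cutting its bit-serial cycle count from 5 to 2 under this model.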

Generative neural networks are a new category of networks that have been widely utilized in many applications such as content generation, unsupervised learning, segmentation, and pose estimation. They typically involve massive computing-intensive deconvolution operations that cannot be fitted to conventional processors directly. However, prior works mainly investigated specialized hardware architectures through intensive modifications to existing deep learning accelerators to accelerate deconvolution together with...

10.1109/tc.2020.3001033 article EN IEEE Transactions on Computers 2020-01-01

In DNN processors, main memory consumes much more energy than arithmetic operations. Therefore, many memory-oriented network scheduling (MONS) techniques have been introduced to exploit on-chip data reuse opportunities and reduce accesses to main memory. However, deriving the theoretical lower bound of memory-access overhead for DNNs is still a significant challenge, one that also sheds light on how to reach memory-level optimality by means of scheduling. Prior work on MONS mainly focused on disparate optimizations or missed some...

10.1109/tc.2021.3112262 article EN IEEE Transactions on Computers 2021-01-01
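The abstract above concerns how scheduling determines off-chip traffic. As a minimal illustration of the underlying idea, the sketch below counts DRAM accesses for an output-stationary tiled matrix multiply under a simple cost model (each input slice loaded once per output tile, each output stored once). This is an assumed toy model to show why tile choice matters, not the paper's lower-bound derivation.

```python
import math

def matmul_traffic(M, N, K, Tm, Tn):
    """Off-chip word traffic for C[M,N] = A[M,K] @ B[K,N] with
    output-stationary (Tm x Tn) tiling. Assumed cost model: each tile
    loads its (Tm x K) slice of A and (K x Tn) slice of B once."""
    tiles = math.ceil(M / Tm) * math.ceil(N / Tn)
    loads = tiles * (Tm * K + K * Tn)  # A and B slices per output tile
    stores = M * N                     # each output element written once
    return loads + stores
```

With M = N = K = 64, an 8x8 tile incurs 69632 word transfers while a single 64x64 tile (i.e., enough on-chip buffer to hold everything) incurs only 12288, showing how scheduling alone changes memory traffic by several times.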

Bit-serial architectures (BSAs) are becoming increasingly popular in low-power neural network processor (NNP) designs for edge scenarios. However, the performance and energy efficiency of state-of-the-art BSA NNPs heavily depend on both the proportion and distribution of ineffectual weight bits in neural networks (NNs). To boost typical BSA accelerators, we present Bit-Pruner, a software approach to learn BSA-favored NNs without resorting to hardware modifications. Bit-Pruner not only progressively prunes but also...

10.1109/tcad.2022.3203955 article EN IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 2022-09-01

Generative neural networks are a new category of networks that have been widely utilized in applications such as content generation, unsupervised learning, segmentation, and pose estimation. They typically involve massive computing-intensive deconvolution operations that cannot be fitted to conventional processors directly. However, prior works mainly investigated specialized hardware architectures through intensive modifications to existing deep learning accelerators to accelerate deconvolution together with convolution. In...

10.48550/arxiv.1907.01773 preprint EN other-oa arXiv (Cornell University) 2019-01-01