- Advanced Neural Network Applications
- Domain Adaptation and Few-Shot Learning
- Adversarial Robustness in Machine Learning
- Advanced Image Processing Techniques
- Generative Adversarial Networks and Image Synthesis
- Anomaly Detection Techniques and Applications
- Machine Learning and Data Classification
- Advanced Vision and Imaging
- Industrial Vision Systems and Defect Detection
- Human Pose and Action Recognition
- Multimodal Machine Learning Applications
- Immune Response and Inflammation
- Image Processing Techniques and Applications
- Image and Signal Denoising Methods
- Advanced Image and Video Retrieval Techniques
- Remote Sensing and LiDAR Applications
- Animal Virus Infections Studies
- Advanced Image Fusion Techniques
- Advanced Algorithms and Applications
- Digital Media Forensic Detection
- Viral Gastroenteritis Research and Epidemiology
- Neural Networks and Applications
- Image Retrieval and Classification Techniques
- Medical Image Segmentation Techniques
- Advanced Multi-Objective Optimization Algorithms
Fujian Polytechnic of Information Technology
2023-2025
South China University of Technology
2005-2024
Guilin Medical University
2024
Max Planck Institute for Informatics
2022-2024
Huawei Technologies (Canada)
2024
Tencent (China)
2023
Chongqing Institute of Green and Intelligent Technology
2022
Peng Cheng Laboratory
2021
Wuchang University of Technology
2021
Wuchang Shouyi University
2021
Deep neural networks have exhibited promising performance in image super-resolution (SR) by learning a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images. However, there are two underlying limitations in existing SR methods. First, the LR-to-HR mapping is typically an ill-posed problem, because there exist infinitely many HR images that can be downsampled to the same LR image. As a result, the space of possible mapping functions is extremely large, which makes it hard to find a good solution. Second, paired LR-HR data...
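The ill-posedness described above is easy to demonstrate concretely: distinct HR patches can collapse to the identical LR observation under a common downsampling operator. A minimal sketch (average pooling as an assumed downsampling operator, not the paper's):

```python
import numpy as np

def downsample(hr: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downsample by average pooling over non-overlapping factor x factor blocks."""
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Two clearly different "HR" patches...
hr_a = np.array([[0., 2.],
                 [2., 0.]])
hr_b = np.ones((2, 2))

# ...yield the same LR observation, so the inverse mapping is one-to-many.
lr_a = downsample(hr_a)
lr_b = downsample(hr_b)
```

Since `lr_a` and `lr_b` are identical, no deterministic function of the LR input alone can recover both HR patches, which is exactly why the space of plausible SR mappings is so large.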
We study network pruning, which aims to remove redundant channels/kernels and hence speed up the inference of deep networks. Existing methods either train from scratch with sparsity constraints or minimize the reconstruction error between the feature maps of pre-trained models and those of compressed ones. Both strategies suffer from some limitations: the former kind is computationally expensive and difficult to converge, while the latter optimizes the reconstruction error but ignores the discriminative power of channels. In this paper, we propose a simple-yet-effective...
Channel pruning is one of the predominant approaches for deep model compression. Existing methods either train from scratch with sparsity constraints on channels, or minimize the reconstruction error between pre-trained feature maps and compressed ones. Both strategies suffer from some limitations: the former kind is computationally expensive and difficult to converge, whilst the latter optimizes the reconstruction error but ignores the discriminative power of channels. To overcome these drawbacks, we investigate a simple-yet-effective method,...
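To make the channel-pruning setting concrete, here is a minimal sketch of the most basic baseline: ranking output channels of a convolution by the L1 norm of their weights and keeping the top fraction. This is an illustrative stand-in, not the discrimination-aware criterion the abstracts argue for.

```python
import numpy as np

def prune_channels_by_l1(weight: np.ndarray, keep_ratio: float):
    """weight: (out_ch, in_ch, kH, kW) conv kernel.
    Returns (indices of kept output channels, pruned weight tensor)."""
    scores = np.abs(weight).sum(axis=(1, 2, 3))      # L1 norm per output channel
    k = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.sort(np.argsort(scores)[::-1][:k])     # top-k channels, original order
    return keep, weight[keep]

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))                    # toy conv layer: 8 output channels
keep, w_pruned = prune_channels_by_l1(w, keep_ratio=0.5)
```

Norm-based scores are cheap but, as the abstract notes, they say nothing about how discriminative a channel is for the task; that gap motivates the proposed method.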
Generating images via a generative adversarial network (GAN) has attracted much attention recently. However, most of the existing GAN-based methods can only produce low-resolution images of limited quality. Directly generating high-resolution images using GANs is nontrivial, and often produces problematic images with incomplete objects. To address this issue, we develop a novel GAN called the auto-embedding network, which simultaneously encodes global structure features and captures fine-grained details. In our method, we use an...
Designing effective architectures is one of the key factors behind the success of deep neural networks. Existing architectures are either manually designed or automatically searched by some Neural Architecture Search (NAS) methods. However, even a well-searched architecture may still contain many non-significant or redundant modules and operations (e.g., convolution and pooling), which not only incur substantial memory consumption and computation cost but also deteriorate performance. Thus, it is necessary to optimize the operations inside...
Deep neural networks have exhibited promising performance in image super-resolution (SR). Most SR models follow a hierarchical architecture that contains both the cell-level design of computational blocks and the network-level positions of upsampling blocks. However, designing such architectures heavily relies on human expertise and is very labor-intensive. More critically, these SR models often contain a huge number of parameters and may not meet the requirements of computation resources in real-world applications. To address the above issues, we...
One of the key steps in Neural Architecture Search (NAS) is to estimate the performance of candidate architectures. Existing methods either directly use the validation performance or learn a predictor to estimate the performance. However, these methods can be computationally expensive or very inaccurate, which may severely affect the search efficiency and performance. Moreover, as it is difficult to annotate architectures with accurate performance on specific tasks, learning a promising predictor is often non-trivial due to the lack of labeled data. In this paper, we argue that it is not necessary...
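The argument above — that search only needs the *relative order* of candidates, not accurate absolute performance — can be sketched in a few lines. Here a hypothetical proxy score (any cheap, possibly miscalibrated signal) drives pairwise comparisons, and the final ranking falls out of an ordinary sort; the architecture names and scores are illustrative assumptions.

```python
from functools import cmp_to_key

# Hypothetical proxy scores standing in for a learned pairwise comparator;
# their absolute values need not match true validation accuracy.
proxy = {"arch_a": 0.61, "arch_b": 0.74, "arch_c": 0.55}

def better(x: str, y: str) -> int:
    """Pairwise comparator: negative means x should rank above y."""
    return -1 if proxy[x] > proxy[y] else 1

# Ranking the candidates only requires consistent pairwise decisions.
ranking = sorted(proxy, key=cmp_to_key(better))
best = ranking[0]
```

As long as the comparator preserves the true ordering on pairs, the selected `best` candidate is correct even if every individual score is biased, which is the intuition behind ranking-based performance estimation.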
Semantic segmentation requires a large amount of densely annotated data for training and may generalize poorly to novel categories. In real-world applications, there is an urgent need for few-shot semantic segmentation, which aims to empower a model to handle unseen object categories with limited annotated data. This task is non-trivial due to several challenges. First, it is difficult to extract the class-relevant information for a novel class as only a few samples are available. Second, since the image content can be very complex, the class-relevant information may be suppressed by base...
In view of the problem that traditional fingerprint devices are prone to incomplete information due to surface noise pollution (such as scratches and peeling) when collecting fingerprints from the fingertip, a method using Optical Coherence Tomography (OCT) to collect subcutaneous internal fingerprints to compensate for the external ones is proposed. In the experiment, an OCT device was used to image the fingertip, and a defective fingerprint was obtained by erasing some regions with image processing technology. Then, features of the OCT subcutaneous fingerprint were compared with those of the external fingerprints. The rotated...
Designing effective architectures is one of the key factors behind the success of deep neural networks. Existing architectures are either manually designed or automatically searched by some Neural Architecture Search (NAS) methods. However, even a well-designed/searched architecture may still contain many non-significant or redundant modules/operations (e.g., intermediate convolution and pooling layers). Such redundancy not only incurs substantial memory consumption and computational cost but also deteriorates...
Batch Normalization (BN) has been a standard component in designing deep neural networks (DNNs). Although BN can significantly accelerate the training of DNNs and improve the generalization performance, it has several underlying limitations which may hamper the performance in both training and inference. In the training stage, BN relies on estimating the mean and variance of the data using a single mini-batch. Consequently, BN can be unstable when the batch size is very small or the batch is poorly sampled. In inference, BN often uses the so-called moving statistics instead of the batch statistics, i.e.,...
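The train/inference mismatch described above is visible in the standard BN update itself. A minimal numpy sketch (no learnable scale/shift, momentum value assumed) of the two modes:

```python
import numpy as np

def bn_train_step(x, running_mean, running_var, momentum=0.1, eps=1e-5):
    """Training mode: normalize with single-mini-batch statistics, then update
    the moving (running) statistics that inference will rely on."""
    mu = x.mean(axis=0)                 # per-feature batch mean
    var = x.var(axis=0)                 # per-feature batch variance
    y = (x - mu) / np.sqrt(var + eps)
    running_mean = (1 - momentum) * running_mean + momentum * mu
    running_var = (1 - momentum) * running_var + momentum * var
    return y, running_mean, running_var

def bn_inference(x, running_mean, running_var, eps=1e-5):
    """Inference mode: the moving statistics replace the batch statistics."""
    return (x - running_mean) / np.sqrt(running_var + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))            # mini-batch of 32 samples, 4 features
y, rm, rv = bn_train_step(x, np.zeros(4), np.ones(4))
```

With a tiny batch, `mu` and `var` are noisy estimates, so `y` fluctuates from step to step; at inference, `rm`/`rv` are frozen averages that may not match any particular batch — the two failure modes the abstract points to.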
Introduction: Accurate classification of single-trial electroencephalogram (EEG) signals is crucial for EEG-based target image recognition in rapid serial visual presentation (RSVP) tasks. P300 is an important component of a target EEG signal in RSVP tasks. However, EEG signals are usually characterized by a low signal-to-noise ratio and limited sample sizes. Methods: Given these challenges, it is necessary to optimize existing convolutional neural networks (CNNs) to improve the performance of P300 classification. The proposed CNN model, called PSAEEGNet,...
Generative adversarial networks (GANs) aim to generate realistic data from some prior distribution (e.g., Gaussian noises). However, such a prior distribution is often independent of the real data and thus may lose semantic information (e.g., geometric structure or content in images) of the data. In practice, the semantic information might be represented by some latent distribution learned from the data, which, however, is hard to be used for sampling in GANs. In this paper, rather than sampling from a pre-defined prior distribution, we propose a Local Coordinate Coding (LCC) based sampling method to improve GANs. We derive...
Model compression aims to reduce the redundancy of deep networks to obtain compact models. Recently, channel pruning has become one of the predominant methods to deploy models on resource-constrained devices. Most methods often use a fixed pruning rate for all the layers of a model, which, however, may not be optimal. To address this issue, given a target rate for the whole model, one can search for the optimal rate for each layer. Nevertheless, these methods perform the search for a specific target rate. When we consider multiple rates, they have to repeat the search process multiple times, which is very inefficient yet...
Deep neural networks have exhibited promising performance in image super-resolution (SR) due to their power in learning a non-linear mapping from low-resolution (LR) images to high-resolution (HR) images. However, most deep SR methods employ feed-forward architectures, and thus the dependencies between LR and HR images are not fully exploited, leading to limited performance. Moreover, most learning-based SR methods apply the pixel-wise reconstruction error as the loss, which, however, may fail to capture high-frequency information and produce perceptually...
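Why a pixel-wise loss washes out high-frequency detail can be shown with a one-dimensional toy example: when two equally plausible sharp targets exist, the expected-MSE minimizer is their blurry average rather than either sharp signal. The signals below are illustrative assumptions.

```python
import numpy as np

# Two equally plausible sharp HR signals: an edge at slightly different positions.
hr_a = np.array([0., 0., 1., 1.])
hr_b = np.array([0., 1., 1., 1.])

def expected_mse(pred: np.ndarray) -> float:
    """Expected pixel-wise MSE when each sharp target is equally likely."""
    return 0.5 * ((pred - hr_a) ** 2).mean() + 0.5 * ((pred - hr_b) ** 2).mean()

# The minimizer of the expected MSE is the pointwise mean of the targets...
blurry_mean = 0.5 * (hr_a + hr_b)
# ...which is smoother than either target: the edge has been averaged away.
```

Here `expected_mse(blurry_mean)` is strictly smaller than the expected MSE of either sharp candidate, so a pixel-wise loss actively prefers the blurred prediction — the perceptual-quality problem the abstract raises.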