- Image Enhancement Techniques
- Advanced Image Processing Techniques
- Advanced Image Fusion Techniques
- Advanced Vision and Imaging
- Speech Recognition and Synthesis
- Speech and Audio Processing
- Image and Signal Denoising Methods
- Music and Audio Processing
- Visual Attention and Saliency Detection
- Natural Language Processing Techniques
- Video Surveillance and Tracking Methods
- Advanced Neural Network Applications
- Advanced Image and Video Retrieval Techniques
- Color Science and Applications
- Nuclear Physics Research Studies
- Image Retrieval and Classification Techniques
- Topic Modeling
- Sparse and Compressive Sensing Techniques
- Neural Networks and Applications
- Meat and Animal Product Quality
- Stochastic Gradient Optimization Techniques
- 3D Shape Modeling and Analysis
- Advanced Computational Techniques and Applications
- Generative Adversarial Networks and Image Synthesis
- Identification and Quantification in Food
Dalian University of Technology
2018-2025
Chinese Academy of Sciences
2005-2025
High Magnetic Field Laboratory
2024-2025
Ningxia Medical University
2023-2025
Tencent (China)
2022-2025
Bengbu Medical College
2005-2024
Bengbu University
2017-2024
Shenyang Jianzhu University
2021-2024
Shandong University
2021-2024
Northwestern Polytechnical University
2022-2024
Low-light image enhancement plays a very important role in low-level vision. Recent works have built a great number of deep learning models to address this task. However, these approaches mostly rely on significant architecture engineering and suffer from a high computational burden. In this paper, we propose a new method, named Retinex-inspired Unrolling with Architecture Search (RUAS), to construct a lightweight yet effective network for low-light images in real-world scenarios. Specifically, building...
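For orientation, here is a minimal sketch (not the released RUAS code) of the Retinex-style unrolling idea this abstract refers to, assuming a toy residual illumination stage and a fixed number of unrolled refinements:

```python
# Illustrative only: a low-light observation y is modeled as y = reflectance * illumination,
# the illumination map is refined over a few unrolled stages, and the enhanced image is
# y / illumination. The `IlluminationStage` module is a hypothetical placeholder.
import torch
import torch.nn as nn

class IlluminationStage(nn.Module):
    """One unrolled refinement stage: predicts a residual update for the illumination map."""
    def __init__(self, channels=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, illumination):
        return illumination + self.conv(illumination)

class RetinexUnrolledEnhancer(nn.Module):
    def __init__(self, num_stages=3):
        super().__init__()
        self.stages = nn.ModuleList([IlluminationStage() for _ in range(num_stages)])

    def forward(self, low_light):
        # Initialize illumination with the observation itself (a common Retinex heuristic).
        illumination = low_light
        for stage in self.stages:
            illumination = stage(illumination).clamp(1e-3, 1.0)
        # Retinex enhancement: reflectance = observation / illumination.
        return (low_light / illumination).clamp(0.0, 1.0)

enhancer = RetinexUnrolledEnhancer()
enhanced = enhancer(torch.rand(1, 3, 64, 64))  # toy input in [0, 1]
```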
Existing low-light image enhancement techniques mostly struggle to deliver both visual quality and computational efficiency, and commonly fail in unknown complex scenarios. In this paper, we develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust brightening of images captured in real-world low-light scenarios. To be specific, we establish a cascaded illumination learning process with weight sharing to handle this task. Considering the computational burden of the cascaded pattern, we construct a self-calibrated module which...
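A rough illustration, under my own assumptions about module shapes, of the weight-sharing cascade with a self-calibration step described above:

```python
# Sketch only: the same illumination-estimation block is applied at every stage,
# and a calibration block maps each stage's result back toward the raw observation
# so the next stage sees a re-calibrated input. Module names are illustrative.
import torch
import torch.nn as nn

class IlluminationBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        # Predict an illumination map no darker than the input (avoids division blow-up).
        return torch.clamp(self.body(x) + x, 1e-3, 1.0)

class SelfCalibratedCascade(nn.Module):
    def __init__(self, num_stages=3):
        super().__init__()
        self.num_stages = num_stages
        self.illum = IlluminationBlock()              # one block shared by every stage
        self.calibrate = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, low_light):
        x = low_light
        outputs = []
        for _ in range(self.num_stages):
            illumination = self.illum(x)
            enhanced = low_light / illumination
            outputs.append(enhanced)
            # Self-calibration: nudge the next stage's input back toward the observation.
            x = low_light + self.calibrate(enhanced - low_light)
        return outputs  # during training, a consistency loss would tie the stages together

model = SelfCalibratedCascade()
stage_outputs = model(torch.rand(1, 3, 64, 64))
```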
Axicabtagene ciloleucel (axi-cel) is a standard of care for patients with relapsed or refractory (r/r) large B cell lymphoma who have received 2 or more lines of prior therapy. Patients receiving axi-cel in the real world could have a broader demographic, disease, and treatment profile compared with the cohort of the pivotal ZUMA-1 trial. The present study was conducted to evaluate outcomes of this therapy in the real-world setting. A total of 1297 patients receiving commercial axi-cel between 2017 and 2020 were selected from the Center for International Blood and Marrow...
Multi-modality image fusion and segmentation play a vital role in autonomous driving and robotic operation. Early efforts focus on boosting the performance of only one task, e.g., fusion or segmentation, making it hard to reach the 'Best of Both Worlds'. To overcome this issue, in this paper we propose a Multi-interactive Feature learning architecture for image fusion and Segmentation, namely SegMiF, to exploit dual-task correlation and promote both tasks. The SegMiF is of a cascade structure, containing a fusion sub-network and a commonly used segmentation sub-network. By...
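A hedged sketch of such a cascaded dual-task layout, with illustrative (not the paper's) module names: a fusion sub-network produces a fused image and shared features, and a segmentation head consumes those features, so both task losses shape them:

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Conv2d(2, 16, 3, padding=1)   # 1-channel IR + 1-channel visible (grayscale)
        self.decode = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, infrared, visible):
        feats = torch.relu(self.encode(torch.cat([infrared, visible], dim=1)))
        return torch.sigmoid(self.decode(feats)), feats

class SegHead(nn.Module):
    def __init__(self, num_classes=9):
        super().__init__()
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, feats):
        return self.head(feats)

fusion, seg = FusionNet(), SegHead()
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
fused, shared_feats = fusion(ir, vis)
logits = seg(shared_feats)  # per-pixel class scores
# Training would combine a fusion loss on `fused` with a cross-entropy loss on `logits`.
```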
Enhancing the quality of low-light (LOL) images plays a very important role in many image processing and multimedia applications. In recent years, a variety of deep learning techniques have been developed to address this challenging task. A typical framework is to simultaneously estimate the illumination and reflectance, but such methods disregard the scene-level contextual information encapsulated in feature spaces, causing unfavorable outcomes, e.g., loss of details, color unsaturation, and artifacts. To address these issues, we...
Low-light image enhancement aims to improve the quality of images captured under low-light conditions, which is a fundamental problem in the computer vision and multimedia areas. Although many efforts have been invested over the years, existing illumination-based models tend to generate unnatural-looking results (e.g., over-exposure). This is because the widely adopted illumination adjustment (e.g., Gamma Correction) breaks down the favorable smoothness property of the original illumination derived from a well-designed estimation...
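As a concrete reference point, the Gamma Correction adjustment mentioned above typically brightens an estimated illumination map L via L' = L^gamma with gamma < 1, and outputs I / L'. The toy example below (my own illustration) shows how strongly the power function stretches dark values relative to bright ones, which is exactly where over-exposure artifacts tend to appear:

```python
import numpy as np

def gamma_adjust_enhance(image, illumination, gamma=0.5, eps=1e-3):
    """image, illumination: float arrays in [0, 1]; returns the brightened image."""
    adjusted = np.power(np.clip(illumination, eps, 1.0), gamma)
    return np.clip(image / adjusted, 0.0, 1.0)

dark_pixel, bright_pixel = 0.05, 0.8
print(dark_pixel ** 0.5, bright_pixel ** 0.5)  # 0.05 -> ~0.22 (4.5x gain), 0.8 -> ~0.89 (1.1x gain)
```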
Recently, multi-modality scene perception tasks, e.g., image fusion and scene understanding, have attracted widespread attention for intelligent vision systems. However, early efforts always consider boosting a single task unilaterally while neglecting the others, and seldom investigate their underlying connections for joint promotion. To overcome these limitations, we establish a hierarchical dual-tasks-driven deep model to bridge these tasks. Concretely, we firstly construct an image fusion module to fuse complementary...
Restoring high-quality images from degraded hazy observations is a fundamental and essential task in the field of computer vision. While deep models have achieved significant success with synthetic data, their effectiveness in real-world scenarios remains uncertain. To improve adaptability to real-world environments, we construct an entirely new computational framework by making efforts in three key aspects: imaging perspective, structural modules, and training strategies. To simulate the often-overlooked multiple...
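For context, most hazy-image modeling (including imaging-perspective efforts like the one above) starts from the standard atmospheric scattering model I(x) = J(x) * t(x) + A * (1 - t(x)), where I is the hazy observation, J the clear scene, t the transmission map, and A the global atmospheric light. A small self-contained sketch of the forward and inverse forms:

```python
import numpy as np

def synthesize_haze(clear, transmission, airlight):
    """Forward model: blend the clear scene with atmospheric light."""
    return clear * transmission + airlight * (1.0 - transmission)

def recover_clear(hazy, transmission, airlight, t_min=0.1):
    """Inverse model: J = (I - A) / max(t, t_min) + A (t_min avoids amplifying noise)."""
    t = np.maximum(transmission, t_min)
    return np.clip((hazy - airlight) / t + airlight, 0.0, 1.0)

clear = np.random.rand(32, 32, 3)
t = np.full((32, 32, 1), 0.6)          # toy constant transmission map
hazy = synthesize_haze(clear, t, airlight=0.9)
restored = recover_clear(hazy, t, airlight=0.9)
print(np.abs(restored - clear).max())  # ~0 for this noiseless toy example
```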
Infrared-visible image fusion (IVIF) is a fundamental and critical task in the field of computer vision. Its aim is to integrate the unique characteristics of both infrared and visible spectra into a holistic representation. Since 2018, a growing number and diversity of IVIF approaches have stepped into the deep-learning era, introducing a broad spectrum of networks and loss functions for improving visual enhancement. As research deepens and practical demands grow, several intricate issues like data compatibility, perception...
Deep learning models have gained great success in many real-world applications. However, most existing networks are typically designed in heuristic manners, and thus these approaches lack rigorous mathematical derivations and clear interpretations. Several recent studies try to build deep models by unrolling a particular optimization model that involves task information. Unfortunately, due to the dynamic nature of network parameters, their resultant propagations do not possess the nice convergence property as...
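To make "unrolling a particular optimization model" concrete, here is a compact, illustrative example (not the paper's construction) that turns proximal-gradient iterations for a sparse-coding objective into network stages whose step sizes and thresholds are learnable:

```python
# Each stage is one ISTA iteration for min_x 0.5*||A x - y||^2 + lambda*||x||_1.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, A, num_stages=10):
        super().__init__()
        self.A = A                                              # fixed measurement operator
        self.step = nn.Parameter(torch.full((num_stages,), 0.1))
        self.thresh = nn.Parameter(torch.full((num_stages,), 0.01))

    def forward(self, y):
        x = torch.zeros(self.A.shape[1])
        for step, thresh in zip(self.step, self.thresh):
            grad = self.A.t() @ (self.A @ x - y)                # gradient of the data term
            z = x - step * grad                                 # gradient step
            x = torch.sign(z) * torch.clamp(z.abs() - thresh, min=0.0)  # soft threshold
        return x

A = torch.randn(20, 50)
y = A @ torch.randn(50)
x_hat = UnrolledISTA(A)(y)
```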
Images captured in low-light scenes often suffer severe degradations, including low visibility, color casts, intensive noise, etc. These factors not only degrade image quality, but also affect the performance of downstream Low-Light Vision (LLV) applications. A variety of deep networks have been proposed to enhance the visual quality of low-light images. However, they mostly rely on significant architecture engineering and incur a high computational burden. More importantly, there is still a lack of an efficient paradigm...
Object detection in low-light scenarios has attracted much attention in the past few years. A mainstream and representative scheme introduces enhancers as pre-processing for regular detectors. However, because of the disparity in task objectives between the enhancer and the detector, this paradigm cannot shine at its best ability. In this work, we try to arouse the potential of the enhancer + detector paradigm. Different from existing works, we extend an illumination-based enhancer (our newly designed or an existing one) into a scene decomposition module, whose removed...
Enhancing the visual quality of images plays a very important role in various vision and learning applications. In the past few years, both knowledge-driven maximum a posteriori (MAP) techniques with prior modeling and fully data-dependent convolutional neural network (CNN) techniques have been investigated to address specific enhancement tasks. In this paper, by exploiting the advantages of these two types of mechanisms within a complementary propagation perspective, we propose a unified framework, named deep ensemble (DPE),...
Recently, learning-based works have been widely investigated to enhance underwater images. However, interactions between various degradation factors (e.g., color distortion and haze effects) inevitably cause negative interference during the inference phase. Thus, these methods cannot fully remove the degradation factors. To address this problem, we propose a novel Joint Luminance Chrominance Learning Network (JLCL-Net). Concretely, we reformulate the task as luminance reconstruction (for haze removal) and chrominance...
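A hedged sketch of the luminance/chrominance split the abstract alludes to, with simple hand-crafted operations standing in for the learned branches:

```python
# Convert an underwater RGB image to YCbCr, treat the Y (luminance) channel as the target
# for contrast/haze-style correction and the Cr/Cb (chrominance) channels as the target
# for color correction, then recombine. The per-channel operations are placeholders.
import numpy as np
import cv2  # OpenCV, used only for the color-space conversions

def enhance_luminance_chrominance(rgb_uint8):
    ycrcb = cv2.cvtColor(rgb_uint8, cv2.COLOR_RGB2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    # Luminance branch (stand-in): histogram equalization to lift contrast lost to haze.
    y = cv2.equalizeHist(y)
    # Chrominance branch (stand-in): pull Cr/Cb toward neutral to reduce the color cast.
    cr = ((cr.astype(np.float32) - 128) * 0.8 + 128).clip(0, 255).astype(np.uint8)
    cb = ((cb.astype(np.float32) - 128) * 0.8 + 128).clip(0, 255).astype(np.uint8)
    return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2RGB)

result = enhance_luminance_chrominance(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
```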
Infrared and visible image fusion is a powerful technique that combines complementary information from different modalities for downstream semantic perception tasks. Existing learning-based methods show remarkable performance, but suffer from an inherent vulnerability to adversarial attacks, causing a significant decrease in accuracy. In this work, a perception-aware fusion framework is proposed to promote segmentation robustness in adversarial scenes. We first conduct systematic analyses of the components of fusion,...
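For reference, the kind of one-step adversarial perturbation (FGSM) that such robustness analyses commonly consider can be sketched as follows; the model and tensor names here are illustrative, not the paper's:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, infrared, visible, labels, epsilon=4 / 255):
    """Perturb both modality inputs along the sign of the segmentation-loss gradient."""
    infrared = infrared.clone().requires_grad_(True)
    visible = visible.clone().requires_grad_(True)
    logits = model(infrared, visible)          # per-pixel class logits from fusion + segmentation
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    # One-step attack: move each input in the direction that increases the loss.
    ir_adv = (infrared + epsilon * infrared.grad.sign()).clamp(0, 1).detach()
    vis_adv = (visible + epsilon * visible.grad.sign()).clamp(0, 1).detach()
    return ir_adv, vis_adv
```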
Improving the visual quality of a given degraded observation by correcting the exposure level is a fundamental task in the computer vision community. Existing works commonly lack adaptability towards unknown scenes because of fixed data-driven patterns (deep networks) and limited regularization (traditional optimization), and they usually require time-consuming inference. These two points heavily limit their practicability. In this paper, we establish a Practical Exposure Corrector (PEC) that assembles the characteristics...
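As a training-free illustration of iterative exposure correction in this spirit (a generic quadratic brightening curve, not necessarily the PEC operator):

```python
# Repeatedly apply x <- x + alpha * x * (1 - x): the update lifts dark and mid-tone values
# while leaving values near 1 essentially untouched, so over-exposure is bounded.
import numpy as np

def correct_exposure(image, alpha=0.3, num_iters=4):
    x = np.clip(image.astype(np.float32), 0.0, 1.0)
    for _ in range(num_iters):
        x = x + alpha * x * (1.0 - x)   # monotone on [0, 1], fixed points at 0 and 1
    return np.clip(x, 0.0, 1.0)

print(correct_exposure(np.array([0.1, 0.5, 0.9])))  # dark values gain the most
```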
Infrared-visible image fusion (IVIF) is a critical task in computer vision, aimed at integrating the unique features of both infrared and visible spectra into a unified representation. Since 2018, the field has entered the deep learning era, with an increasing variety of approaches introducing a range of networks and loss functions to enhance visual performance. However, challenges such as data compatibility, perception accuracy, and efficiency remain. Unfortunately, there is a lack of recent comprehensive surveys that...