Xiaojie Guo

ORCID: 0000-0002-0326-8382
Research Areas
  • Image Enhancement Techniques
  • Advanced Image Processing Techniques
  • Advanced Image and Video Retrieval Techniques
  • Advanced Vision and Imaging
  • Advanced Image Fusion Techniques
  • Video Surveillance and Tracking Methods
  • Sparse and Compressive Sensing Techniques
  • Robotics and Sensor-Based Localization
  • Face recognition and analysis
  • Image and Signal Denoising Methods
  • Image Retrieval and Classification Techniques
  • Visual Attention and Saliency Detection
  • Face and Expression Recognition
  • Advanced Neural Network Applications
  • Generative Adversarial Networks and Image Synthesis
  • Remote-Sensing Image Classification
  • Domain Adaptation and Few-Shot Learning
  • Formal Methods in Verification
  • Multimodal Machine Learning Applications
  • Advanced Steganography and Watermarking Techniques
  • Image Processing Techniques and Applications
  • Real-Time Systems Scheduling
  • Digital Media Forensic Detection
  • Blind Source Separation Techniques
  • Embedded Systems Design Techniques

Tianjin University
2011-2024

Jiaozuo University
2024

Nanjing University of Science and Technology
2023

Université Grenoble Alpes
2017-2023

Centre National de la Recherche Scientifique
2017-2023

Institut polytechnique de Grenoble
2023

Laboratoire d'Informatique de Grenoble
2023

Centre Inria de l'Université Grenoble Alpes
2023

Technical University of Denmark
2022

Tsinghua University
2022

When one captures images in low-light conditions, they often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degenerate the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in its R, G and B channels....

10.1109/tip.2016.2639450 article EN publisher-specific-oa IEEE Transactions on Image Processing 2016-12-14
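The first step the abstract describes, estimating each pixel's illumination as the maximum of its R, G and B values, can be sketched in a few lines. The structure-aware refinement that LIME then applies to this map is omitted here, and the `gamma` and `eps` values below are illustrative, not the paper's:

```python
import numpy as np

def initial_illumination(image):
    """Initial illumination map: for each pixel, the maximum over its
    R, G and B channels, as described in the abstract.

    image: float array of shape (H, W, 3) with values in [0, 1].
    Returns an (H, W) illumination map.
    """
    return image.max(axis=2)

def enhance(image, gamma=0.8, eps=1e-3):
    # Retinex-style enhancement: divide the observation by the
    # (gamma-adjusted) illumination map. LIME additionally refines the
    # map with a structure-aware prior, which this sketch omits.
    t = initial_illumination(image) ** gamma
    return np.clip(image / (t[..., None] + eps), 0.0, 1.0)
```

Dark pixels (small illumination estimate) are brightened the most, which matches the intuition of dividing out the illumination component.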

Images captured under low-light conditions often suffer from (partially) poor visibility. Besides unsatisfactory lighting, multiple types of degradation, such as noise and color distortion due to the limited quality of cameras, hide in the dark. In other words, solely turning up the brightness of dark regions will inevitably amplify the hidden artifacts. This work builds a simple yet effective network for Kindling the Darkness (denoted as KinD), which, inspired by the Retinex theory, decomposes images into two...

10.1145/3343031.3350926 article EN Proceedings of the 30th ACM International Conference on Multimedia 2019-10-15

This paper addresses the problem of rain streak removal from a single image. Rain streaks impair the visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing methods that address this problem either employ dictionary learning or impose a low-rank structure on the appearance of the rain streaks. While these methods improve overall visibility, they tend to...

10.1109/cvpr.2016.299 article EN 2016-06-01

10.1007/s11263-018-1117-z article EN International Journal of Computer Vision 2018-09-22

10.1007/s11263-020-01407-x article EN International Journal of Computer Vision 2021-01-06

In this paper, we propose a fast unified image fusion network based on proportional maintenance of gradient and intensity (PMGI), which can end-to-end realize a variety of image fusion tasks, including infrared and visible image fusion, multi-exposure image fusion, medical image fusion, multi-focus image fusion and pan-sharpening. We unify the image fusion problem into the texture and intensity proportional maintenance problem of the source images. On the one hand, the network is divided into a gradient path and an intensity path for information extraction. We perform feature reuse in the same path to avoid loss of information due to convolution. At the same time, we introduce a pathwise transfer block to exchange information between the different...

10.1609/aaai.v34i07.6975 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2020-04-03
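The "proportional maintenance of gradient and intensity" idea can be illustrated with a toy loss: the fused image is pushed toward the sources' intensities and gradients in fixed proportions. The weights below are hypothetical placeholders; PMGI derives its own weighting scheme and trains a convolutional network rather than evaluating this direct numpy computation:

```python
import numpy as np

def gradients(img):
    # Forward-difference gradients, cropped to a common (H-1, W-1) size.
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return gx[:-1, :], gy[:, :-1]

def pmgi_style_loss(fused, src1, src2, w_int=(0.5, 0.5), w_grad=(0.5, 0.5)):
    """Illustrative loss in the spirit of PMGI: keep the intensity and
    the gradient (texture) of the two sources in given proportions.
    The weights are toy values, not the paper's."""
    l_int = (w_int[0] * np.mean((fused - src1) ** 2)
             + w_int[1] * np.mean((fused - src2) ** 2))
    fx, fy = gradients(fused)
    g1, g2 = gradients(src1), gradients(src2)
    l_grad = (w_grad[0] * (np.mean((fx - g1[0]) ** 2) + np.mean((fy - g1[1]) ** 2))
              + w_grad[1] * (np.mean((fx - g2[0]) ** 2) + np.mean((fy - g2[1]) ** 2)))
    return l_int + l_grad
```

The loss vanishes only when the fused image matches both sources in intensity and texture; otherwise the weights decide which source each term favors.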

Reversible data hiding in encrypted images has attracted considerable attention from the communities of privacy security and protection. The success of previous methods in this area has shown that a superior performance can be achieved by exploiting the redundancy within the image. Specifically, because the pixels in local structures (like patches or regions) have strong similarity, they can be heavily compressed, thus resulting in a large hiding room. In this paper, to better explore the correlation between neighbor pixels, we propose...

10.1109/tcyb.2015.2423678 article EN IEEE Transactions on Cybernetics 2015-04-30

In this paper, we present a new unsupervised and unified densely connected network for different types of image fusion tasks, termed as FusionDN. In our method, the network is trained to generate a fused image conditioned on the source images. Meanwhile, a weight block is applied to obtain two data-driven weights as the retention degrees of features in the source images, which are a measurement of the quality and the amount of information in them. Losses of similarities based on these weights are applied for learning. In addition, a single model is made applicable to multiple fusion tasks by applying elastic...

10.1609/aaai.v34i07.6936 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2020-04-03

We present a comprehensive study and evaluation of existing single image deraining algorithms, using a new large-scale benchmark consisting of both synthetic and real-world rainy images. This dataset highlights diverse data sources and contents, and is divided into three subsets (rain streak, rain drop, rain and mist), each serving different training or evaluation purposes. We further provide a rich variety of criteria for deraining algorithm evaluation, ranging from full-reference metrics, to no-reference metrics, to subjective evaluation and the novel...

10.1109/cvpr.2019.00396 article EN 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019-06-01

Feature matching, which refers to establishing reliable correspondences between two sets of feature points, is a critical prerequisite in feature-based image registration. This paper proposes a simple yet surprisingly effective approach, termed as guided locality preserving matching, for robust feature matching of remote sensing images. The key idea of our approach is to merely preserve the neighborhood structures of potential true matches. We formulate it into a mathematical model, and derive a closed-form solution with...

10.1109/tgrs.2018.2820040 article EN IEEE Transactions on Geoscience and Remote Sensing 2018-04-18
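A toy version of the neighborhood-preservation idea: a putative match is trusted only if the nearest neighbors of its point in the first image remain nearest neighbors of its point in the second. The function, `k`, and `tau` below are illustrative inventions, not the paper's model or its closed-form solution:

```python
import numpy as np

def neighborhood_consensus(pts1, pts2, k=4, tau=0.5):
    """Keep the i-th putative match (pts1[i] <-> pts2[i]) if at least a
    fraction tau of its k nearest neighbors among the matched points in
    the first image are also among its k nearest neighbors in the second.
    True matches under a smooth motion preserve these neighborhoods;
    outliers tend to break them."""
    def knn(pts):
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
        return np.argsort(d, axis=1)[:, :k]
    n1, n2 = knn(np.asarray(pts1, float)), knn(np.asarray(pts2, float))
    keep = [len(set(n1[i]) & set(n2[i])) / k >= tau for i in range(len(n1))]
    return np.array(keep)
```

Under a pure translation, every neighborhood is preserved exactly, so all matches survive; a grossly misplaced point loses most of its neighborhood overlap and is flagged.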

This study proposes a novel unified and unsupervised end-to-end image fusion network, termed as U2Fusion, which is capable of solving different fusion problems, including multi-modal, multi-exposure and multi-focus cases. Using feature extraction and information measurement, U2Fusion automatically estimates the importance of the corresponding source images and comes up with adaptive information preservation degrees. Hence, different fusion tasks are unified in the same framework. Based on these degrees, the network is trained to preserve the adaptive similarity between the fusion result...

10.1109/tpami.2020.3012548 article EN publisher-specific-oa IEEE Transactions on Pattern Analysis and Machine Intelligence 2020-07-28

Multi-view subspace clustering aims to partition a set of multi-source data into their underlying groups. To boost the performance of multi-view clustering, numerous subspace learning algorithms have been developed in recent years, but with rare exploitation of the representation complementarity between different views as well as the indicator consistency among the representations, let alone considering them simultaneously. In this paper, we propose a novel model that attempts to harness the complementary information...

10.1109/cvpr.2017.8 article EN 2017-07-01

Low-rank matrix approximation has been successfully applied to numerous vision problems in recent years. In this paper, we propose a novel low-rank prior for blind image deblurring. Our key observation is that directly applying a simple low-rank model to a blurry input significantly reduces the blur even without using any kernel information, while preserving important edge information. The same model can also be used to reduce the blur in the gradient map of the input. Based on these properties, we introduce an enhanced prior for image deblurring by combining...

10.1109/tip.2016.2571062 article EN IEEE Transactions on Image Processing 2016-05-20
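The "simple low-rank model" the abstract mentions boils down, at its core, to low-rank matrix approximation. A minimal sketch via truncated SVD is shown below; the paper itself applies a low-rank model to the image and its gradient map inside a full deblurring pipeline, which is not reproduced here:

```python
import numpy as np

def low_rank_approx(M, rank):
    """Best rank-r approximation of M in the Frobenius norm
    (Eckart-Young), computed by zeroing all but the leading
    singular values of the SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0  # keep only the top `rank` singular values
    return (U * s) @ Vt  # scale U's columns by s, then recompose
```

Applied to (patches of) a blurry image, such a projection suppresses small singular values that carry fine, blur-induced oscillations while keeping the dominant structure.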

Feature matching, which refers to establishing reliable correspondences between two sets of features, is a critical prerequisite in a wide spectrum of vision-based tasks. Existing attempts typically involve removing mismatches from a set of putative matches based on estimating the underlying image transformation. However, the transformation could vary with different data. Thus, a pre-defined transformation model is often demanded, which severely limits the applicability. From a novel perspective, this paper casts the mismatch removal into a two-class...

10.1109/tip.2019.2906490 article EN IEEE Transactions on Image Processing 2019-03-20

To overcome the overfitting issue of dehazing models trained on synthetic hazy-clean image pairs, many recent methods have attempted to improve the models' generalization ability by training on unpaired data. Most of them simply formulate dehazing and rehazing cycles, yet ignore the physical properties of the real-world hazy environment, i.e., that haze varies with density and depth. In this paper, we propose a self-augmented image dehazing framework, termed D4 (Dehazing via Decomposing the transmission map into Density and Depth), for haze generation and removal....

10.1109/cvpr52688.2022.00208 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01
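The physical property the abstract points to, haze varying with density and depth, is usually expressed through the atmospheric scattering model: transmission t = exp(-beta * d), hazy image I = J*t + A*(1 - t). A minimal sketch with illustrative values of the density `beta` and atmospheric light `A` (not values from the paper):

```python
import numpy as np

def add_haze(J, depth, beta=1.0, A=0.9):
    """Atmospheric scattering model: the transmission map depends on
    haze density (beta) and scene depth, and the hazy image blends the
    clear scene J with the atmospheric light A accordingly."""
    t = np.exp(-beta * depth)     # transmission falls off with density * depth
    return J * t + A * (1.0 - t)  # near: ~J; far or dense: ~A
```

At zero depth the scene is untouched; as depth or density grows, pixels converge to the atmospheric light, which is why decomposing t into density and depth gives D4 control over realistic haze generation.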

This paper focuses on removing mismatches from given putative feature matches, created typically based on descriptor similarity. To achieve this goal, existing attempts usually involve estimating the image transformation under a geometrical constraint, where a pre-defined transformation model is demanded. This severely limits the applicability, as the transformation could vary with different data and can be complex and hard to model in many real-world tasks. From a novel perspective, this paper casts the mismatch removal into a spatial clustering problem with outliers. The main idea...

10.1109/tip.2019.2934572 article EN IEEE Transactions on Image Processing 2019-08-26

10.1007/s11263-022-01667-9 article EN International Journal of Computer Vision 2022-10-02

When one records a video/image sequence through a transparent medium (e.g. glass), the image is often a superposition of a transmitted layer (the scene behind the medium) and a reflected layer. Recovering the two layers from such images seems to be a highly ill-posed problem, since the number of unknowns to recover is twice as many as the given measurements. In this paper, we propose a robust method to separate these two layers from multiple images, which exploits the correlation across the images and the sparsity and independence of the gradient fields of the two layers. A novel Augmented...

10.1109/cvpr.2014.281 article EN 2014 IEEE Conference on Computer Vision and Pattern Recognition 2014-06-01

Moving object detection is one of the most fundamental tasks in computer vision. Many classic and contemporary algorithms work well under the assumption that backgrounds are stationary and movements are continuous, but degrade sharply when they are used in a real system, mainly due to: 1) dynamic background (e.g., swaying trees, water ripples and fountains in normal scenarios, as well as raindrops and snowflakes in bad weather) and 2) irregular movement (like lingering objects). This paper presents a unified framework for addressing...

10.1109/tcyb.2015.2419737 article EN IEEE Transactions on Cybernetics 2015-04-20

Filtering images is required by numerous multimedia, computer vision and graphics tasks. Despite the diverse goals of different tasks, making effective filtering rules is key to the filtering performance. Linear translation-invariant filters with manually designed kernels have been widely used. However, their performance suffers from content-blindness. To mitigate this content-blindness, a family of filters, called joint/guided filters, has attracted a great amount of attention from the community. The main drawback of most joint/guided filters comes from the ignorance...

10.1109/tpami.2018.2883553 article EN publisher-specific-oa IEEE Transactions on Pattern Analysis and Machine Intelligence 2018-11-28
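The canonical member of the joint/guided family the abstract refers to is the guided filter of He et al., which smooths a source image while following the edges of a guide image. A grayscale sketch under the standard formulation (radius `r` and regularizer `eps` are illustrative choices):

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-clamped, via an integral image."""
    h, w = img.shape
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w); x1 = np.clip(np.arange(w) + r + 1, 0, w)
    s = ii[y1][:, x1] - ii[y0][:, x1] - ii[y1][:, x0] + ii[y0][:, x0]
    n = (y1 - y0)[:, None] * (x1 - x0)[None, :]  # actual window sizes at edges
    return s / n

def guided_filter(guide, src, r=2, eps=1e-2):
    """Classic guided filter: locally fit src ~ a*guide + b, then smooth
    the coefficients, so the output inherits the guide's edges."""
    mI, mp = box(guide, r), box(src, r)
    cov = box(guide * src, r) - mI * mp
    var = box(guide * guide, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box(a, r) * guide + box(b, r)
```

Where the guide is flat, `a` shrinks toward 0 and the filter behaves like a plain box blur; where the guide has strong edges, `a` grows and the edges are transferred to the output, which is exactly the content-aware behavior plain linear filters lack.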