- Image and Video Quality Assessment
- Advanced Image Processing Techniques
- Advanced Image Fusion Techniques
- Image and Signal Denoising Methods
- Image Enhancement Techniques
- Visual Attention and Saliency Detection
- Random Matrices and Applications
- Video Coding and Compression Technologies
- Statistical Methods and Inference
- Bayesian Methods and Mixture Models
- Advanced Data Compression Techniques
- Advanced Vision and Imaging
- Advanced Statistical Methods and Models
- Statistical Methods and Bayesian Inference
- Color Science and Applications
- Advanced Combinatorial Mathematics
- Industrial Vision Systems and Defect Detection
- Image Retrieval and Classification Techniques
- Image Processing Techniques and Applications
- Infrared Target Detection Methodologies
- Advanced Image and Video Retrieval Techniques
- Point Processes and Geometric Inequalities
- Advanced Algebra and Geometry
- Visual Perception and Processing Mechanisms
- Advanced Neural Network Applications
Shanghai Jiao Tong University
2019-2025
University of Waterloo
2016-2025
University of Utah
2023-2025
New York University
2003-2025
Huazhong University of Science and Technology
2011-2025
Bozhou People's Hospital
2025
Anhui Medical University
2025
National University of Singapore
2015-2024
Jiangsu University of Science and Technology
2024
Tongren Hospital
2024
Objective methods for assessing perceptual image quality have traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise...
We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modeling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than...
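The three-factor decomposition described above can be written out directly. The sketch below is a simplified, whole-image version (the published index is computed over sliding windows and averaged); the function name and the global single-window form are my own simplification, not the paper's exact procedure.

```python
import numpy as np

def universal_quality_index(x, y):
    """Sketch of a universal image quality index, global single-window form.

    Distortion is modeled as the product of three factors:
    loss of correlation, luminance distortion, and contrast distortion.
    The published index applies this over local sliding windows;
    this whole-image version is a simplification for illustration.
    """
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()

    correlation = cov / np.sqrt(vx * vy)         # loss of correlation
    luminance = 2 * mx * my / (mx**2 + my**2)    # luminance distortion
    contrast = 2 * np.sqrt(vx * vy) / (vx + vy)  # contrast distortion
    return correlation * luminance * contrast
```

An undistorted image scores exactly 1 (all three factors equal 1), and any mean shift, contrast change, or decorrelation pulls the score below 1.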
The structural similarity image quality assessment paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales...
In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon use of the MSE, nor blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment...
Many state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage structure: local quality/distortion measurement followed by pooling. While significant progress has been made in measuring local image quality/distortion, the pooling stage is often done in ad-hoc ways, lacking theoretical principles and reliable computational models. This paper aims to test the hypothesis that when viewing natural images, the optimal perceptual weights for pooling should be proportional to local information content, which can...
A new median-based filter, the progressive switching median (PSM) filter, is proposed to restore images corrupted by salt-pepper impulse noise. The algorithm is developed from the following two main points: 1) switching scheme: an impulse detection algorithm is used before filtering, so only a proportion of all the pixels will be filtered; and 2) progressive methods: both the impulse detection and the noise filtering procedures are progressively applied through several iterations. Simulation results demonstrate that the proposed algorithm is better than traditional median-based filters and is particularly effective for the cases where...
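The two ideas, switching (filter only detected impulses) and progression (iterate detection and filtering), can be sketched as follows. The deviation-from-median detection rule and the threshold below are simplifications of my own, not the published detector.

```python
import numpy as np

def psm_filter(img, n_iter=3, threshold=20.0):
    """Simplified sketch of a progressive switching median filter.

    Switching: detect likely impulse pixels first and replace only
    those with the local median. Progression: repeat detection and
    filtering over several iterations so that dense noise patches are
    cleaned up gradually.
    """
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for _ in range(n_iter):
        padded = np.pad(out, 1, mode='edge')
        # 3x3 neighborhood median at every pixel (9 shifted views)
        windows = np.stack([padded[dy:dy + h, dx:dx + w]
                            for dy in range(3) for dx in range(3)])
        med = np.median(windows, axis=0)
        noisy = np.abs(out - med) > threshold  # impulse detection
        out[noisy] = med[noisy]                # filter flagged pixels only
        if not noisy.any():
            break
    return out
```

Because unflagged pixels are left untouched, edges and fine detail in noise-free regions survive better than under a plain median filter applied everywhere.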
Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. In this paper, we first build an MEF database and carry out a subjective user study to evaluate the quality of images generated by different MEF algorithms. There are several useful findings. First, considerable agreement is observed among human subjects on the quality of MEF images. Second, no single state-of-the-art...
Human observers can easily assess the quality of a distorted image without examining the original image as a reference. By contrast, designing objective no-reference (NR) quality measurement algorithms is a very difficult task. Currently, NR quality assessment is feasible only when prior knowledge about the types of image distortion is available. This research aims to develop NR quality measurement algorithms for JPEG compressed images. First, we established a JPEG image database and subjective experiments were conducted on the database. We show that the Peak Signal-to-Noise Ratio (PSNR),...
Image quality assessment plays an important role in various image processing applications. A great deal of effort has been made in recent years to develop objective quality metrics that correlate with perceived quality measurement. Unfortunately, only limited success has been achieved. In this paper, we provide some insights on why image quality assessment is so difficult by pointing out the weaknesses of the error sensitivity based framework, which has been used by most approaches in the literature. Furthermore, we propose a new philosophy in designing image quality metrics: The main...
The great content diversity of real-world digital images poses a grand challenge to image quality assessment (IQA) models, which are traditionally designed and validated on a handful of commonly used IQA databases with very limited content variation. To test the generalization capability and to facilitate the wide usage of IQA techniques in real-world applications, we establish a large-scale database named the Waterloo Exploration Database, which in its current state contains 4,744 pristine natural images and 94,880 distorted images created from them. Instead...
We introduce a new measure of image similarity called the complex wavelet structural similarity (CW-SSIM) index and show its applicability as a general purpose image similarity index. The key idea behind CW-SSIM is that certain image distortions lead to consistent phase changes in the local wavelet coefficients, and that a consistent phase shift of the coefficients does not change the structural content of the image. By conducting four case studies, we have demonstrated the superiority of CW-SSIM against other similarity indices (e.g., Dice index, Hausdorff distance) commonly used for assessing the similarity of a given pair of images. In addition,...
We propose a deep bilinear model for blind image quality assessment (BIQA) that handles both synthetic and authentic distortions. Our model consists of two convolutional neural networks (CNN), each of which specializes in one distortion scenario. For synthetic distortions, we pre-train a CNN to classify distortion type and level, where we enjoy large-scale training data. For authentic distortions, we adopt a CNN pre-trained on an image classification task. The features from the two CNNs are pooled bilinearly into a unified representation for final quality prediction. We then fine-tune the entire model on target...
Tone-mapping operators (TMOs) that convert high dynamic range (HDR) to low dynamic range (LDR) images provide practically useful tools for the visualization of HDR images on standard LDR displays. Different TMOs create different tone-mapped images, and a natural question is which one has the best quality. Without an appropriate quality measure, different TMOs cannot be compared, and further improvement is directionless. Subjective rating may be a reliable evaluation method, but it is expensive and time consuming, and, more importantly, is difficult to be embedded...
We propose a multi-task end-to-end optimized deep neural network (MEON) for blind image quality assessment (BIQA). MEON consists of two sub-networks, a distortion identification network and a quality prediction network, sharing the early layers. Unlike traditional methods used for training multi-task networks, our training process is performed in two steps. In the first step, we train the distortion type identification sub-network, for which large-scale training samples are readily available. In the second step, starting from the pre-trained early layers and the outputs of the first sub-network, we train the quality prediction sub-network using a variant of the stochastic gradient...
This Lecture book is about objective image quality assessment, where the aim is to provide computational models that can automatically predict perceptual image quality. The early years of the 21st cent...
Since its introduction in 2004, the structural similarity (SSIM) index has gained widespread popularity as a tool to assess the quality of images and to evaluate the performance of image processing algorithms and systems. There has also been a growing interest in using SSIM as an objective function in optimization problems in a variety of applications. One major issue that could strongly impede the progress of such efforts is the lack of understanding of the mathematical properties of the SSIM measure. For example, some highly desirable properties such as convexity and triangular...
Reduced-reference (RR) image quality measures aim to predict the visual quality of distorted images with only partial information about the reference images. In this paper, we propose an RR quality assessment method based on a natural image statistic model in the wavelet transform domain. In particular, we observe that the marginal distribution of wavelet coefficients changes in different ways for different types of image distortions. To quantify such changes, we estimate the Kullback-Leibler distance between the marginal distributions of wavelet coefficients of the reference and distorted images. A generalized Gaussian model is employed...
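The central quantity here, the Kullback-Leibler distance between the two marginal coefficient distributions, can be sketched directly. The paper fits a generalized Gaussian density to each subband and computes the KLD between the fitted models; the version below instead uses histogram estimates of the two marginals, which illustrates the same quantity without the GGD fitting step.

```python
import numpy as np

def kl_distance(coeffs_ref, coeffs_dist, bins=64, eps=1e-12):
    """Histogram-based Kullback-Leibler distance between the marginal
    distributions of two sets of (e.g. wavelet subband) coefficients.

    A shared bin range is used so the two histograms are comparable;
    `eps` avoids log-of-zero in empty bins.
    """
    lo = min(coeffs_ref.min(), coeffs_dist.min())
    hi = max(coeffs_ref.max(), coeffs_dist.max())
    p, _ = np.histogram(coeffs_ref, bins=bins, range=(lo, hi))
    q, _ = np.histogram(coeffs_dist, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```

Distortions that reshape the coefficient marginal (e.g. blur concentrating coefficients near zero, noise spreading them out) increase this distance, which is what makes it usable as an RR quality feature.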
The objective measurement of blocking artifacts plays an important role in the design, optimization, and assessment of image and video coding systems. We propose a new approach that can blindly measure blocking artifacts in images without reference to the originals. The key idea is to model the blocky image as a non-blocky image interfered with a pure blocky signal. The task of the blocking effect measurement algorithm is then to detect and evaluate the power of the blocky signal. The proposed approach has the flexibility to integrate human visual system features such as luminance and texture masking effects.
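A minimal proxy for this idea compares intensity differences that straddle 8x8 block boundaries with those inside blocks; boundary-aligned energy is exactly what the pure blocky signal contributes. This ratio measure is my own simplification for illustration, not the paper's power-spectrum-based estimator.

```python
import numpy as np

def blockiness(img, block=8):
    """Sketch of a blind blockiness measure for block-coded images.

    Compares mean horizontal intensity differences across block
    boundaries with those in block interiors. A ratio near 1 suggests
    no boundary-aligned structure; large ratios indicate blockiness.
    """
    img = img.astype(np.float64)
    diff = np.abs(np.diff(img, axis=1))        # horizontal gradients
    cols = np.arange(diff.shape[1])
    at_boundary = (cols % block) == block - 1  # diffs straddling a boundary
    boundary = diff[:, at_boundary].mean()
    interior = diff[:, ~at_boundary].mean()
    return boundary / (interior + 1e-12)
```

A full measure would combine horizontal and vertical directions and, as the abstract notes, weight the result by luminance and texture masking.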
Contrast is a fundamental attribute of images that plays an important role in human visual perception of image quality. With numerous approaches proposed to enhance image contrast, much less work has been dedicated to automatic quality assessment of contrast changed images. Existing methods rely on global statistics to estimate contrast quality. Here we propose a novel local patch-based objective quality assessment method using an adaptive representation of local patch structure, which allows us to decompose any image patch into its mean intensity, signal strength and signal structure...
Reduced-reference image quality assessment (RRIQA) methods estimate image quality degradations with partial information about the "perfect-quality" reference image. In this paper, we propose an RRIQA algorithm based on a divisive normalization image representation. Divisive normalization has been recognized as a successful approach to model the perceptual sensitivity of biological vision. It also provides a useful image representation that significantly improves statistical independence for natural images. By using a Gaussian scale...
Contrast distortion is often a determining factor in human perception of image quality, but little investigation has been dedicated to the quality assessment of contrast-distorted images without assuming the availability of a perfect-quality reference image. In this letter, we propose a simple but effective method for no-reference quality assessment of contrast distorted images based on the principle of natural scene statistics (NSS). A large scale image database is employed to build NSS models based on moment and entropy features. The quality of a contrast distorted image is then evaluated based on its...
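The moment and entropy features named in the abstract are straightforward to extract. The sketch below computes them for an 8-bit grayscale image; the specific feature set, the bin count, and the assumption of a [0, 255] range are illustrative choices of mine, and a full method would compare these features against NSS models learned from a large natural-image database.

```python
import numpy as np

def contrast_nss_features(img, bins=128):
    """Feature-extraction sketch for NR quality assessment of
    contrast-distorted images: moment features (mean, standard
    deviation, skewness, excess kurtosis) plus an entropy feature.

    Assumes an 8-bit intensity range for the histogram.
    """
    x = img.astype(np.float64).ravel()
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / (sigma + 1e-12)
    skewness = np.mean(z ** 3)
    kurtosis = np.mean(z ** 4) - 3.0
    hist, _ = np.histogram(x, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))
    return {"mean": mu, "std": sigma, "skewness": skewness,
            "kurtosis": kurtosis, "entropy": entropy}
```

Contrast stretching and compression move the standard deviation and entropy in predictable directions (a washed-out image has low std and low entropy), which is why these simple statistics carry quality information.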