Dataset and Metrics for Predicting Local Visible Differences
KEYWORDS
visual metric, distortion visibility, visual perception, visual difference predictor, image quality, dataset, convolutional neural network, data-driven metric
DOI: 10.1145/3196493
Publication Date: 2018-11-27
AUTHORS (9)
ABSTRACT
A large number of imaging and computer graphics applications require localized information on the visibility of image distortions. Existing image quality metrics are not suitable for this task, as they provide a single quality value per image. Existing visibility metrics produce visual difference maps and are specifically designed to detect just-noticeable distortions, but their predictions are often inaccurate. In this work, we argue that the key reason for this problem is the lack of large image collections with good coverage of the distortions that occur in different applications. To address the problem, we collect an extensive dataset of reference and distorted image pairs together with user markings indicating whether distortions are visible or not. We propose a statistical model designed for the meaningful interpretation of such data, which is affected by visual search and the imprecision of manual marking. We use our dataset to train existing metrics and demonstrate that their performance improves significantly. We show that our dataset, combined with the proposed statistical model, can be used to train a new CNN-based metric that outperforms existing solutions. We demonstrate the utility of such a metric in visually lossless JPEG compression, super-resolution, and watermarking.
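For intuition only, the sketch below shows the general shape of a CNN-based visibility metric as described in the abstract: a fully convolutional network takes a reference/distorted image pair and outputs a per-pixel probability that the distortion is visible. The class name, layer sizes, and training-free usage here are illustrative assumptions and do not reproduce the network or statistical model proposed in the paper.

# Hypothetical sketch of a CNN-based visibility metric (not the paper's architecture).
# It maps a reference/distorted pair to a per-pixel probability-of-detection map.
import torch
import torch.nn as nn

class VisibilityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1),   # reference + distorted, 3 channels each
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),    # one visibility logit per pixel
        )

    def forward(self, reference, distorted):
        x = torch.cat([reference, distorted], dim=1)       # (N, 6, H, W)
        return torch.sigmoid(self.net(x))                  # detection probability in [0, 1]

# Usage: feed a reference/distorted pair and threshold the map to flag visible distortions.
ref = torch.rand(1, 3, 64, 64)
dist = (ref + 0.1 * torch.randn_like(ref)).clamp(0, 1)
vis_map = VisibilityCNN()(ref, dist)                       # (1, 1, 64, 64)
print((vis_map > 0.5).float().mean())                      # fraction of pixels marked visible

In practice such a network would be trained against per-pixel visibility labels (like the user markings in the collected dataset), with a loss that accounts for marking imprecision; the untrained forward pass above only illustrates the input/output interface.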