Image Quality Assessment: Unifying Structure and Texture Similarity

Cited by: 502
Authors
Ding, Keyan [1 ]
Ma, Kede [1 ]
Wang, Shiqi [1 ]
Simoncelli, Eero P. [2 ,3 ,4 ]
Affiliations
[1] Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong, China
[2] Flatiron Institute, Simons Foundation, New York, NY 10003, USA
[3] Center for Neural Science, New York University, New York, NY 10003, USA
[4] Courant Institute of Mathematical Sciences, New York University, New York, NY 10003, USA
Funding
National Natural Science Foundation of China
Keywords
Visualization; image quality; distortion measurement; nonlinear distortion; indexes; databases; convolution; image quality assessment; structure similarity; texture similarity; perceptual optimization; model; visibility; features; scale
DOI
10.1109/TPAMI.2020.3045810
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Objective measures of image quality generally operate by comparing pixels of a "degraded" image to those of the original. Relative to human observers, these measures are overly sensitive to resampling of texture regions (e.g., replacing one patch of grass with another). Here, we develop the first full-reference image quality model with explicit tolerance to texture resampling. Using a convolutional neural network, we construct an injective and differentiable function that transforms images into multi-scale overcomplete representations. We demonstrate empirically that the spatial averages of the feature maps in this representation capture texture appearance, in that they provide a set of sufficient statistical constraints to synthesize a wide variety of texture patterns. We then describe an image quality method that combines correlations of these spatial averages ("texture similarity") with correlations of the feature maps ("structure similarity"). The parameters of the proposed measure are jointly optimized to match human ratings of image quality, while minimizing the reported distances between subimages cropped from the same texture images. Experiments show that the optimized method explains human perceptual scores, both on conventional image quality databases and on texture databases. The measure also offers competitive performance on related tasks such as texture classification and retrieval. Finally, we show that our method is relatively insensitive to geometric transformations (e.g., translation and dilation), without any specialized training or data augmentation. Code is available at https://github.com/dingkeyan93/DISTS.
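The abstract describes two ingredients: an SSIM-like comparison of the spatial averages of deep feature maps ("texture similarity") and a normalized-covariance comparison of the feature maps themselves ("structure similarity"), averaged across stages of a convolutional network. The sketch below is a minimal illustration of that idea, not the authors' released implementation (see the GitHub link above): it assumes an off-the-shelf torchvision VGG16 as the feature extractor (the paper modifies the network), uses uniform stage weights in place of the learned per-channel weights, and picks arbitrary stabilizing constants c1 and c2.

```python
# Minimal sketch of a structure/texture similarity measure in the spirit of
# the abstract. Assumptions (not the authors' code): plain pretrained VGG16
# features, uniform weights instead of learned per-channel weights, and
# arbitrary small constants c1, c2.
import torch
import torch.nn as nn
from torchvision import models


class StructureTextureSimilarity(nn.Module):
    def __init__(self, c1=1e-6, c2=1e-6):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
        # Split the VGG16 feature extractor into five stages, ending at
        # relu1_2, relu2_2, relu3_3, relu4_3, and relu5_3 respectively.
        self.stages = nn.ModuleList([
            vgg[:4], vgg[4:9], vgg[9:16], vgg[16:23], vgg[23:30],
        ])
        for p in self.parameters():
            p.requires_grad_(False)
        self.c1, self.c2 = c1, c2

    def _features(self, x):
        feats = [x]  # treat the raw image as "stage 0"
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats

    def forward(self, x, y):
        """x, y: image batches of shape (N, 3, H, W), values in [0, 1]."""
        score = 0.0
        feats_x, feats_y = self._features(x), self._features(y)
        for fx, fy in zip(feats_x, feats_y):
            # Spatial statistics of each feature map.
            mu_x, mu_y = fx.mean((2, 3)), fy.mean((2, 3))
            var_x = fx.var((2, 3), unbiased=False)
            var_y = fy.var((2, 3), unbiased=False)
            cov_xy = ((fx - mu_x[..., None, None]) *
                      (fy - mu_y[..., None, None])).mean((2, 3))
            # "Texture" term: SSIM-like comparison of the global means.
            texture = (2 * mu_x * mu_y + self.c1) / (mu_x**2 + mu_y**2 + self.c1)
            # "Structure" term: normalized covariance of the feature maps.
            structure = (2 * cov_xy + self.c2) / (var_x + var_y + self.c2)
            # Uniform weighting here; the paper instead learns the weights
            # by fitting human quality ratings.
            score = score + 0.5 * (texture + structure).mean()
        return 1.0 - score / len(feats_x)  # distance: 0 for identical images
```

Under these assumptions, `d = StructureTextureSimilarity()(ref, dist)` for two (N, 3, H, W) tensors in [0, 1] yields a distance that is zero for identical inputs and grows with structural discrepancy, while remaining comparatively tolerant to texture resampling because the texture term depends only on spatial averages.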
Pages: 2567-2581
Page count: 15