A Feature-Enriched Completely Blind Image Quality Evaluator

Cited by: 921
Authors
Zhang, Lin [1 ,2 ]
Zhang, Lei [3 ]
Bovik, Alan C. [4 ]
Affiliations
[1] Tongji Univ, Sch Software Engn, Shanghai 201804, Peoples R China
[2] Shenzhen Inst Future Media Technol, Shenzhen 518055, Peoples R China
[3] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Hong Kong, Peoples R China
[4] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
Funding
U.S. National Science Foundation;
Keywords
Blind image quality assessment; natural image statistics; multivariate Gaussian; NATURAL SCENE STATISTICS; ARTIFACTS; COLOR;
DOI
10.1109/TIP.2015.2426416
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Existing blind image quality assessment (BIQA) methods are mostly opinion-aware. They learn regression models from training images with associated human subjective scores to predict the perceptual quality of test images. Such opinion-aware methods, however, require a large number of training samples with associated human subjective scores, covering a variety of distortion types. The BIQA models learned by opinion-aware methods often have weak generalization capability, thereby limiting their usability in practice. By comparison, opinion-unaware methods do not need human subjective scores for training and thus have greater potential for good generalization capability. Unfortunately, no opinion-unaware BIQA method has thus far shown consistently better quality-prediction accuracy than the opinion-aware methods. Here, we aim to develop an opinion-unaware BIQA method that can compete with, and perhaps outperform, the existing opinion-aware methods. By integrating natural image statistics features derived from multiple cues, we learn a multivariate Gaussian (MVG) model of image patches from a collection of pristine natural images. Using the learned MVG model, a Bhattacharyya-like distance is used to measure the quality of each image patch, and an overall quality score is then obtained by average pooling. The proposed BIQA method does not need any distorted sample images or subjective quality scores for training, yet extensive experiments demonstrate that its quality-prediction performance is superior to that of state-of-the-art opinion-aware BIQA methods. The MATLAB source code of our algorithm is publicly available at www.comp.polyu.edu.hk/~cslzhang/IQA/ILNIQE/ILNIQE.htm.
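As a rough illustration of the quality measure described in the abstract, the following Python sketch (not the authors' released MATLAB implementation) computes a Bhattacharyya-like distance between a learned pristine multivariate Gaussian model and per-patch feature vectors, then average-pools the per-patch distances into one score. All function and variable names are illustrative; feature extraction and the fitting of the pristine model are assumed to be done elsewhere.

import numpy as np

def patch_distance(mu_pristine, cov_pristine, patch_feature, patch_cov):
    # Bhattacharyya-like distance between the pristine MVG (mu_pristine,
    # cov_pristine) and a patch-level fit (patch_feature, patch_cov):
    # sqrt( (mu1 - mu2)^T ((Sigma1 + Sigma2) / 2)^-1 (mu1 - mu2) )
    diff = mu_pristine - patch_feature
    pooled = (cov_pristine + patch_cov) / 2.0
    # Pseudo-inverse guards against an ill-conditioned pooled covariance.
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

def pooled_quality(mu_pristine, cov_pristine, patch_features, patch_cov):
    # patch_features: (n_patches, n_features) array of quality-aware features
    # extracted from the test image; patch_cov: covariance estimated from them.
    scores = [patch_distance(mu_pristine, cov_pristine, f, patch_cov)
              for f in patch_features]
    return float(np.mean(scores))  # average pooling over patches

In this sketch a single covariance estimated from the test image's patches is shared across all patches, which follows the abstract only loosely; the released MATLAB code at the URL above is the authoritative reference.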
Pages: 2579-2591
Number of pages: 13