Bridge the Gap Between Full-Reference and No-Reference: A Totally Full-Reference Induced Blind Image Quality Assessment via Deep Neural Networks

Cited: 1
Authors
Ma, Xiaoyu [1 ,2 ]
Zhang, Suiyu [1 ]
Liu, Chang [1 ]
Yu, Dingguo [1 ]
Affiliations
[1] Commun Univ Zhejiang, 998, Xueyuan Rd, Hangzhou 310042, Zhejiang, Peoples R China
[2] Peng Cheng Lab, 2 Xingke Rd, Shenzhen 518055, Guangdong, Peoples R China
Keywords
deep neural networks; image quality assessment; adversarial auto-encoder; STRUCTURAL SIMILARITY;
DOI
10.23919/JCC.2023.00.023
Chinese Library Classification (CLC)
TN [Electronic technology; Communication technology];
Discipline Classification Code
0809;
Abstract
Blind image quality assessment (BIQA) is of fundamental importance in the low-level computer vision community, and there is growing interest in exploiting deep neural networks for BIQA. Despite notable successes, there is a broad consensus that training deep convolutional neural networks (DCNNs) relies heavily on massive annotated data. Unfortunately, BIQA is typically a small-sample problem, which severely restricts the generalization ability of BIQA models. To improve the accuracy and generalization ability of BIQA metrics, this work proposes a totally opinion-unaware BIQA approach in which no subjective annotations are involved in the training stage. Multiple full-reference image quality assessment (FR-IQA) metrics are employed to label distorted images as a substitute for subjective quality annotations. A deep neural network (DNN) is trained to blindly predict the multiple FR-IQA scores in the absence of the corresponding pristine image. Finally, a self-supervised FR-IQA score aggregator, implemented by an adversarial auto-encoder, pools the predicted FR-IQA scores into the final quality score. Even though no subjective scores are involved in the training stage, experimental results indicate that the proposed full-reference-induced BIQA framework is as competitive as state-of-the-art BIQA metrics.
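The core idea of the abstract — computing several FR-IQA scores against the pristine image at training time and using them as pseudo-labels for a blind regressor — can be sketched as below. This is a minimal illustration, not the paper's implementation: `psnr` is standard, while `gradient_similarity` is a hypothetical stand-in for the paper's FR-IQA metric set, and the pseudo-label vector would in practice supervise a DNN.

```python
import numpy as np

def psnr(ref, dist, max_val=1.0):
    """Full-reference PSNR (dB) between a pristine and a distorted image."""
    mse = np.mean((ref - dist) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def gradient_similarity(ref, dist, c=1e-4):
    """A simple full-reference structure score based on gradient magnitudes.

    Returns 1.0 for identical images and decreases as structure diverges;
    a stand-in for metrics such as SSIM used in the paper.
    """
    gy_r, gx_r = np.gradient(ref)
    gy_d, gx_d = np.gradient(dist)
    gm_r = np.hypot(gx_r, gy_r)
    gm_d = np.hypot(gx_d, gy_d)
    sim = (2 * gm_r * gm_d + c) / (gm_r ** 2 + gm_d ** 2 + c)
    return float(sim.mean())

def pseudo_label(ref, dist):
    """Stack multiple FR-IQA scores into a pseudo-label vector.

    The pristine image is available only during training; the blind
    network then learns to regress this vector from the distorted
    image alone, so no subjective annotations are ever needed.
    """
    return np.array([psnr(ref, dist), gradient_similarity(ref, dist)])

# Toy training pair: a pristine image and a noise-distorted copy.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
noisy = np.clip(ref + rng.normal(0.0, 0.1, ref.shape), 0.0, 1.0)
labels = pseudo_label(ref, noisy)  # regression target for the blind DNN
```

A final aggregation stage (the paper's adversarial auto-encoder) would then pool such multi-metric predictions into one quality score.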
Pages: 215-228
Page count: 14