Joint channel-spatial attention network for super-resolution image quality assessment

Cited by: 8
Authors
Zhang, Tingyue [1 ]
Zhang, Kaibing [1 ]
Xiao, Chuan [2 ]
Xiong, Zenggang [3 ]
Lu, Jian [1 ]
Affiliations
[1] Xian Polytech Univ, Sch Elect & Informat, Xian 710048, Peoples R China
[2] Yantai Nanshan Univ, Coll Engn, Yantai 265700, Peoples R China
[3] Hubei Engn Univ, Sch Comp & Informat Sci, Xiaogan 432000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Two-stream convolutional networks; Attention module; No-reference image quality assessment; Image super resolution;
DOI
10.1007/s10489-022-03338-1
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Image super-resolution (SR) is an effective technique for enhancing the quality of low-resolution (LR) images. However, one of the most fundamental problems in SR is evaluating the quality of the resulting images in order to compare and optimize the performance of SR algorithms. In this paper, we propose a novel deep network model, referred to as a joint channel-spatial attention network (JCSAN), for no-reference SR image quality assessment (NR-SRIQA). The JCSAN consists of two branches that learn middle-level and primary-level features to jointly quantify the degradation of SR images. In the middle-level feature learning subnetwork, we embed a two-stage convolutional block attention module (CBAM) to capture discriminative perceptual feature maps through channel and spatial attention in sequence, while a shallow convolutional subnetwork is adopted to learn dense, primary-level textural feature maps. To yield a more accurate quality estimate for SR images, we integrate a unit aggregation gate (AG) module that dynamically distributes channel weights between the feature maps from the two branches. Extensive experimental results on two benchmark datasets verify the superiority of the proposed JCSAN-based quality metric in comparison with other state-of-the-art competitors.
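The abstract describes a two-stream architecture: a middle-level branch with two-stage CBAM attention (channel followed by spatial), a shallow primary-level branch for dense textural features, and an aggregation gate (AG) that distributes channel weights between the two branches before regressing a quality score. Below is a minimal PyTorch sketch of that structure; the layer widths, kernel sizes, gating formulation, and regression head are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a JCSAN-style two-stream quality network (assumed configuration).
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Convolutional block attention: channel attention followed by spatial attention."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: shared MLP over globally pooled channel statistics.
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        # Spatial attention: 7x7 conv over pooled per-pixel channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = torch.mean(x, dim=(2, 3), keepdim=True)
        mx = torch.amax(x, dim=(2, 3), keepdim=True)
        x = x * torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        avg_s = torch.mean(x, dim=1, keepdim=True)
        max_s = torch.amax(x, dim=1, keepdim=True)
        x = x * torch.sigmoid(self.spatial_conv(torch.cat([avg_s, max_s], dim=1)))
        return x


class JCSANSketch(nn.Module):
    """Two branches fused by a channel-wise aggregation gate, regressing one quality score."""

    def __init__(self, channels=64):
        super().__init__()
        # Middle-level branch: deeper conv stack with two CBAM stages.
        self.mid_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            CBAM(channels),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            CBAM(channels),
        )
        # Primary-level branch: shallow convs for dense textural features.
        self.primary_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Aggregation gate: predicts per-channel weights for combining the two branches.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1),
            nn.Sigmoid(),
        )
        self.regressor = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1)
        )

    def forward(self, x):
        f_mid = self.mid_branch(x)
        f_pri = self.primary_branch(x)
        w = self.gate(torch.cat([f_mid, f_pri], dim=1))  # channel weights in [0, 1]
        fused = w * f_mid + (1.0 - w) * f_pri
        return self.regressor(fused)  # scalar quality score per image


# Example usage: score a batch of SR image patches (shapes are illustrative).
# model = JCSANSketch()
# scores = model(torch.randn(4, 3, 224, 224))  # -> tensor of shape [4, 1]
```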
Pages: 17118-17132
Number of pages: 15