MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment

Cited by: 203
Authors
Yang, Sidi [1]
Wu, Tianhe [1,2]
Shi, Shuwei [1]
Lao, Shanshan [1]
Gong, Yuan [1]
Cao, Mingdeng [1]
Wang, Jiahao [1]
Yang, Yujiu [1]
Affiliations
[1] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[2] Tsinghua Univ, Shenzhen, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022 | 2022
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPRW56347.2022.00126
Chinese Library Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception. Unfortunately, existing NR-IQA methods fall far short of predicting accurate quality scores for images with GAN-based distortions. To this end, we propose the Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) to improve performance on GAN-based distortion. We first extract features via ViT; then, to strengthen global and local interactions, we propose the Transposed Attention Block (TAB) and the Scale Swin Transformer Block (SSTB). These two modules apply attention mechanisms across the channel and spatial dimensions, respectively. In this multi-dimensional manner, the modules cooperatively increase the interaction among different regions of images, both globally and locally. Finally, a dual-branch structure for patch-weighted quality prediction predicts the final score from the weighted scores of all patches. Experimental results demonstrate that MANIQA outperforms state-of-the-art methods on four standard datasets (LIVE, TID2013, CSIQ, and KADID-10K) by a large margin. In addition, our method ranked first in the final testing phase of the NTIRE 2022 Perceptual Image Quality Assessment Challenge Track 2: No-Reference. Codes and models are available at https://github.com/IIGROUP/MANIQA.
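
To make the abstract's design concrete, below is a minimal PyTorch sketch of the two ideas it names: attention applied across the channel dimension (the role of the TAB) and the dual-branch, patch-weighted score head. The class names, layer sizes, the softmax scaling, and the sigmoid weighting are illustrative assumptions rather than the authors' exact implementation; the official code at the repository above is authoritative.

# Minimal sketch, assuming PyTorch. Shapes and helper names are hypothetical.
import torch
import torch.nn as nn

class TransposedAttention(nn.Module):
    """Self-attention transposed onto the channel dimension (C x C attention map)."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) -- N spatial tokens, C channels
        q, k, v = self.qkv(x).chunk(3, dim=-1)             # each (B, N, C)
        q, k, v = (t.transpose(1, 2) for t in (q, k, v))   # (B, C, N)
        # Attention is computed between channels, not spatial positions;
        # sqrt scaling is a simplification assumed here.
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)  # (B, C, C)
        out = (attn @ v).transpose(1, 2)                   # back to (B, N, C)
        return self.proj(out)

class PatchWeightedHead(nn.Module):
    """Dual branch: one branch scores each patch, the other weights it."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, 1))
        self.weight = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                    nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C); final score is the weighted mean of per-patch scores
        s, w = self.score(x), self.weight(x)               # (B, N, 1) each
        return (s * w).sum(dim=1) / w.sum(dim=1).clamp_min(1e-8)  # (B, 1)

# Toy usage: 196 ViT patch tokens with 768 channels
tokens = torch.randn(2, 196, 768)
feats = TransposedAttention(768)(tokens)
print(PatchWeightedHead(768)(feats).shape)  # torch.Size([2, 1])

The weight branch lets the network down-weight patches that contribute little to perceived quality, so the final score is a saliency-weighted mean of per-patch scores rather than a plain average.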
Pages: 1190-1199 (10 pages)