Image Quality Assessment Using Contrastive Learning

Cited by: 142
Authors
Madhusudana, Pavan C. [1 ]
Birkbeck, Neil [2 ]
Wang, Yilin [2 ]
Adsumilli, Balu [2 ]
Bovik, Alan C. [1 ]
Affiliations
[1] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
[2] Google Inc, Mountain View, CA 94043 USA
Funding
U.S. National Science Foundation
Keywords
Distortion; Task analysis; Image quality; Predictive models; Training; Convolutional neural networks; Computational modeling; No-reference image quality assessment; blind image quality assessment; self-supervised learning; deep learning
DOI
10.1109/TIP.2022.3181496
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We consider the problem of obtaining image quality representations in a self-supervised manner. We use prediction of distortion type and degree as an auxiliary task to learn features from an unlabeled image dataset containing a mixture of synthetic and realistic distortions. We then train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem. We refer to the proposed training framework and resulting deep IQA model as the CONTRastive Image QUality Evaluator (CONTRIQUE). During evaluation, the CNN weights are frozen and a linear regressor maps the learned representations to quality scores in a No-Reference (NR) setting. We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models, even without any additional fine-tuning of the CNN backbone. The learned representations are highly robust and generalize well across images afflicted by either synthetic or authentic distortions. Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets. The implementations used in this paper are available at https://github.com/pavancm/CONTRIQUE.
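The two stages described in the abstract — a contrastive pairwise objective that groups images by distortion type and degree, followed by a frozen encoder whose features feed a linear regressor — can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual implementation: the NT-Xent-style loss form, the use of discrete distortion labels to define positive pairs, and the closed-form ridge regressor are all simplifying assumptions made here for clarity.

```python
import numpy as np

def contrastive_pairwise_loss(features, labels, temperature=0.1):
    """NT-Xent-style pairwise loss over a batch of embeddings.

    Embeddings of images sharing the same (hypothetical) distortion
    class/level label are treated as positives and pulled together;
    all other pairs in the batch are pushed apart.
    """
    z = features / np.linalg.norm(features, axis=1, keepdims=True)  # L2-normalize
    sim = z @ z.T / temperature                                     # scaled cosine similarity
    n = len(labels)
    total, counted = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        positives = [j for j in others if labels[j] == labels[i]]
        if not positives:
            continue
        denom = np.sum(np.exp(sim[i, others]))
        total += -np.mean([np.log(np.exp(sim[i, j]) / denom) for j in positives])
        counted += 1
    return total / max(counted, 1)

def fit_linear_regressor(frozen_features, mos, l2=1.0):
    """Closed-form ridge regression mapping frozen CNN features to quality scores."""
    X = np.hstack([frozen_features, np.ones((len(frozen_features), 1))])  # append bias column
    return np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ mos)
```

In this sketch, a batch whose same-label embeddings are already close yields a lower loss than one with mismatched labels, which is the signal the auxiliary distortion-prediction task exploits; at evaluation time only the small ridge regressor is fit on subjective scores, leaving the backbone untouched.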
Pages: 4149-4161 (13 pages)