Continual Learning for Blind Image Quality Assessment

Cited by: 67
Authors
Zhang, Weixia [1 ]
Li, Dingquan [2 ]
Ma, Chao [1 ]
Zhai, Guangtao [1 ]
Yang, Xiaokang [1 ]
Ma, Kede [3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, AI Inst, Key Lab Artificial Intelligence, MoE, Shanghai 200240, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518066, Guangdong, Peoples R China
[3] City Univ Hong Kong, Dept Comp Sci, Kowloon, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Blind image quality assessment; continual learning; subpopulation shift; statistics; blur
DOI
10.1109/TPAMI.2022.3178874
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The explosive growth of image data drives the rapid development of image processing and computer vision methods for emerging visual applications, while also introducing novel distortions into processed images. This poses a grand challenge to existing blind image quality assessment (BIQA) models, which adapt poorly to subpopulation shift. Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets. However, this approach does not scale to a large number of datasets and makes it cumbersome to incorporate newly created datasets. In this paper, we formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets, building on what was learned from previously seen data. We first identify five desiderata for the continual setting, along with three criteria to quantify prediction accuracy, plasticity, and stability, respectively. We then propose a simple yet effective continual learning method for BIQA. Specifically, based on a shared backbone network, we add a prediction head for each new dataset and enforce a regularizer that allows all prediction heads to evolve with new data while resisting catastrophic forgetting of old data. We compute the overall quality score by a weighted summation of predictions from all heads. Extensive experiments demonstrate the promise of the proposed continual learning method in comparison to standard training techniques for BIQA, with and without experience replay. The code is publicly available at https://github.com/zwx8981/BIQA_CL.
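The abstract outlines the model structure: a shared backbone, a prediction head added per IQA dataset, and an overall quality score formed as a weighted summation of head predictions. Below is a minimal PyTorch sketch of that idea, assuming a ResNet-18 backbone; the `ContinualBIQA` class, the softmax gating used to weight the heads, and the `add_head` helper are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
# Illustrative sketch only: shared backbone, one head per dataset, weighted sum of heads.
# Backbone choice, gating scheme, and class/function names are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torchvision.models as models


class ContinualBIQA(nn.Module):
    """Shared backbone with one quality-prediction head per seen IQA dataset."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)               # shared feature extractor
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.feat_dim = backbone.fc.in_features                # 512 for ResNet-18
        self.heads = nn.ModuleList()                           # grows as datasets arrive
        self.gate = None                                       # produces per-head weights

    def add_head(self):
        """Call when a new IQA dataset appears in the stream."""
        self.heads.append(nn.Linear(self.feat_dim, 1))
        # Rebuild the gate so it outputs one weight per head (illustrative choice).
        self.gate = nn.Linear(self.feat_dim, len(self.heads))

    def forward(self, x):
        f = self.features(x).flatten(1)                        # (B, feat_dim)
        scores = torch.cat([h(f) for h in self.heads], dim=1)  # (B, K) per-head predictions
        weights = torch.softmax(self.gate(f), dim=1)           # (B, K) head weights
        return (weights * scores).sum(dim=1)                   # weighted summation of heads


# Usage: grow the model as new datasets arrive. During training, a stability
# regularizer (e.g., penalizing drift of old heads' outputs) would be added to
# the loss to resist catastrophic forgetting of previously seen data.
model = ContinualBIQA()
model.add_head()                                               # first dataset
model.add_head()                                               # second dataset
quality = model(torch.randn(2, 3, 224, 224))                   # (2,) overall quality scores
```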
Pages: 2864-2878
Number of pages: 15