Blind image super-resolution based on prior correction network

Cited by: 9
Authors
Cao, Xiang [1 ]
Luo, Yihao [1 ]
Xiao, Yi [2 ]
Zhu, Xianyi [3 ]
Wang, Tianjiang [1 ]
Feng, Qi [1 ]
Tan, Zehan [4 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Hubei, Peoples R China
[2] Hunan Univ, Sch Design, Changsha 410082, Hunan, Peoples R China
[3] Xiangnan Univ, Coll Software & Commun Engn, Chenzhou 423000, Peoples R China
[4] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
Keywords
Convolutional neural networks; Blind super-resolution; Blur kernel; Correction filter; Attention network
DOI
10.1016/j.neucom.2021.07.070
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Convolutional neural network (CNN) based super-resolution (SR) methods have achieved remarkable progress in recent years. Most of these methods assume that the degradation is known and fixed, such as bicubic downsampling. However, the performance of CNN-based methods drops severely when the actual degradation mismatches the one seen during training. This paper proposes a prior correction network (PCNet) for the blind SR problem, which enables CNN-based super-resolvers trained on a fixed blur kernel to be applied to other, unknown blur kernels. PCNet consists of a kernel estimation network, a correction filter module, and a correction refinement network. The kernel estimation network estimates the unknown blur kernel from the input low-resolution (LR) image. The correction filter module then transfers the LR image from the estimated degradation domain to the specific degradation the super-resolver was trained on (e.g., bicubic downsampling). Finally, the correction refinement network adjusts the corrected LR image to eliminate the influence of blur-kernel mismatch or misestimation. Experimental results on diverse datasets show that the proposed PCNet, combined with existing CNN-based SR methods, outperforms other state-of-the-art blind SR algorithms. Code is available at: https://github.com/caoxiang104/PCNet. (c) 2021 Elsevier B.V. All rights reserved.
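The correction-filter idea described in the abstract (transferring an LR image from its estimated degradation domain to the domain the super-resolver was trained on) can be sketched in the frequency domain. The snippet below is a minimal, blur-only NumPy illustration, not the paper's method: PCNet's kernel estimation and refinement modules are learned CNNs, the correction also has to account for downsampling, and the function name and Wiener-style `eps` regularization here are assumptions for the sketch.

```python
import numpy as np

def correction_filter(lr, k_est, k_target, eps=1e-3):
    """Map an image degraded by blur kernel k_est toward one degraded
    by k_target (e.g. the kernel the super-resolver was trained on).
    Division is regularized by eps to avoid blowing up frequencies
    where the estimated kernel's spectrum is near zero."""
    h, w = lr.shape
    # Zero-pad both kernels to image size and take their 2-D spectra.
    K_est = np.fft.fft2(k_est, s=(h, w))
    K_tgt = np.fft.fft2(k_target, s=(h, w))
    # Regularized ratio of target to estimated degradation spectra.
    H = K_tgt * np.conj(K_est) / (np.abs(K_est) ** 2 + eps)
    # Apply the correction filter by pointwise spectral multiplication.
    corrected = np.real(np.fft.ifft2(np.fft.fft2(lr) * H))
    return corrected
```

When `k_est` equals `k_target`, the filter is close to identity on the frequencies the blur preserves, so an already-blurred image passes through nearly unchanged; the refinement network in the paper exists precisely to absorb the residual error when the two kernels do not match.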
Pages: 525-534 (10 pages)