DDNSR: a dual-input degradation network for real-world super-resolution

Cited by: 2
Authors
Li, Yizhi [1 ]
Chen, Haixin [1 ]
Li, Tao [1 ]
Liu, Binbing [1 ]
Affiliations
[1] Huazhong University of Science and Technology, School of Optical and Electronic Information, Wuhan 430074, People's Republic of China
Keywords
Real-world super-resolution; Degradation network; Self-supervised learning; Deep learning
DOI
10.1007/s10044-023-01150-2
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Recently, real-world super-resolution has become one of the most active research directions within single image super-resolution, as it focuses on real-world applications. Due to the lack of paired training data, real-world super-resolution is considered an especially challenging problem. Previous works attempted to model the real image degradation process so that paired training images could be obtained. Specifically, some methods try to explicitly estimate degradation kernels and noise patterns, while others introduce degradation networks to learn mappings from high-resolution (HR) images to low-resolution (LR) images, a more direct and practical approach. However, previous degradation networks take only an HR image as input and therefore can hardly learn the real sensor noise contained in LR samples. In this paper, we propose a novel dual-input degradation network that takes a real LR image as an additional input to better learn the real sensor noise. Furthermore, we propose an effective self-supervised learning method to train the degradation network synchronously with the reconstruction network. Extensive experiments show that our dual-input degradation network simulates the real degradation process more faithfully, and that the resulting reconstruction network outperforms state-of-the-art methods. The original code and most of the test data are available on our website.
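The abstract describes two cooperating components: a degradation network that consumes both an HR image and an unpaired real LR image, and a reconstruction network trained synchronously on the synthesized pairs. The following is a minimal PyTorch sketch of that setup; all names, layer choices, and losses are illustrative assumptions, not the paper's actual DDNSR architecture or objective (in particular, the adversarial terms that would drive noise learning are omitted).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualInputDegradationNet(nn.Module):
    """Hypothetical: maps an HR image plus an unpaired real LR image to a synthetic LR image."""

    def __init__(self, channels=3, features=64, scale=4):
        super().__init__()
        self.scale = scale
        self.hr_encoder = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True))
        # The real LR branch exposes real sensor-noise statistics to the network.
        self.lr_encoder = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1))

    def forward(self, hr, real_lr):
        # Bring the HR content down to LR resolution, then fuse it with the
        # encoded real LR reference so realistic degradation can be injected.
        hr_down = F.interpolate(hr, scale_factor=1.0 / self.scale,
                                mode='bicubic', align_corners=False)
        fused = torch.cat([self.hr_encoder(hr_down), self.lr_encoder(real_lr)], dim=1)
        return hr_down + self.fuse(fused)  # residual: predict only the degradation


def joint_train_step(deg_net, sr_net, hr_batch, real_lr_batch, opt_deg, opt_sr):
    """One synchronous update of both networks (assumed losses; adversarial
    terms against real LR images are omitted for brevity)."""
    # 1) Degradation step: synthesize LR images from HR patches.
    fake_lr = deg_net(hr_batch, real_lr_batch)
    bicubic_lr = F.interpolate(hr_batch, scale_factor=1.0 / deg_net.scale,
                               mode='bicubic', align_corners=False)
    # A content-consistency loss keeps the synthetic LR faithful to the HR content.
    loss_deg = F.l1_loss(fake_lr, bicubic_lr)
    opt_deg.zero_grad()
    loss_deg.backward()
    opt_deg.step()

    # 2) Reconstruction step: train the SR network on the synthesized pair.
    sr = sr_net(fake_lr.detach())
    loss_sr = F.l1_loss(sr, hr_batch)
    opt_sr.zero_grad()
    loss_sr.backward()
    opt_sr.step()
    return loss_deg.item(), loss_sr.item()
```

In a full pipeline of this kind, the degradation step would also include an adversarial loss discriminating fake_lr from real LR images; that term is what actually pushes the network to reproduce real sensor noise, while the L1 term above only anchors image content.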
Pages: 875-888
Page count: 14