Fast Single-Image Super-Resolution via Deep Network With Component Learning

Cited by: 38
Authors
Xie, Chao [1 ,2 ]
Zeng, Weili [3 ]
Lu, Xiaobo [1 ,2 ]
Affiliations
[1] Southeast Univ, Sch Automat, Nanjing 210096, Jiangsu, Peoples R China
[2] Southeast Univ, Key Lab Measurement & Control Complex Syst Engn, Minist Educ, Nanjing 210096, Jiangsu, Peoples R China
[3] Nanjing Univ Aeronaut & Astronaut, Coll Civil Aviat, Nanjing 210016, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computational modeling; Image reconstruction; Training; Image resolution; Convolutional codes; Encoding; Single image super-resolution; component learning; deep convolutional neural networks; SPARSE REPRESENTATION; RECONSTRUCTION; SIMILARITY; ALGORITHM; LIMITS;
DOI
10.1109/TCSVT.2018.2883771
Chinese Library Classification
TM [Electrical Technology]; TN [Electronics & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Driven by the spectacular success of deep learning, several advanced models based on neural networks have recently been proposed for single-image super-resolution, incrementally revealing their superiority over their alternatives. In this paper, we pursue this line of research and present an improved network structure that takes advantage of the proposed component learning. The core idea that distinguishes this learning strategy is to use the residual component extracted from the input to predict its counterpart in the corresponding output. To this end, a global decomposition procedure based on convolutional sparse coding is designed and performed on the input to extract its low-resolution (LR) residual component. Owing to the properties of this decomposition, the extracted residual component remains in the LR space, so the subsequent part of the network can process it economically in terms of computational complexity. Thorough experimental results demonstrate the merit and effectiveness of the proposed component learning strategy, and our trained model outperforms many state-of-the-art methods in terms of both speed and reconstruction quality.
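The abstract describes extracting an LR residual component from the input and predicting its high-resolution counterpart while keeping the heavy computation in LR space. The sketch below is only an illustrative approximation of that idea, not the authors' implementation: a fixed Gaussian low-pass filter stands in for the paper's convolutional-sparse-coding decomposition, and the module names (ComponentSRNet, gaussian_kernel), layer widths, and sub-pixel upsampling are assumptions made for the example.

# Hypothetical PyTorch sketch of the component-learning idea from the abstract.
# A fixed Gaussian low-pass filter replaces the paper's convolutional sparse
# coding decomposition, purely for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel (stand-in low-pass filter)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return (kernel / kernel.sum()).view(1, 1, size, size)

class ComponentSRNet(nn.Module):
    """Toy residual-component network: every convolution runs in LR space,
    and upsampling happens only at the very end, keeping computation cheap."""
    def __init__(self, scale=2, channels=64):
        super().__init__()
        self.register_buffer("lowpass", gaussian_kernel())
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            # Sub-pixel layer maps LR features to HR residual pixels.
            nn.Conv2d(channels, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        # 1) Decompose the LR input into a smooth component and an LR residual.
        smooth = F.conv2d(lr, self.lowpass, padding=2)
        residual = lr - smooth
        # 2) Predict the HR residual from the LR residual (component learning).
        hr_residual = self.body(residual)
        # 3) Recompose: upscale the smooth component and add the predicted residual.
        hr_smooth = F.interpolate(smooth, scale_factor=self.scale,
                                  mode="bicubic", align_corners=False)
        return hr_smooth + hr_residual

if __name__ == "__main__":
    net = ComponentSRNet(scale=2)
    lr_image = torch.rand(1, 1, 48, 48)   # grayscale LR patch
    sr_image = net(lr_image)
    print(sr_image.shape)                 # torch.Size([1, 1, 96, 96])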
Pages: 3473-3486
Number of pages: 14
Related Papers
50 records
  • [1] A Conspectus of Deep Learning Techniques for Single-Image Super-Resolution
    Pandey, Garima
    Ghanekar, Umesh
    PATTERN RECOGNITION AND IMAGE ANALYSIS, 2022, 32 (01) : 11 - 32
  • [2] Single-image super-resolution via local learning
    Tang, Yi
    Yan, Pingkun
    Yuan, Yuan
    Li, Xuelong
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2011, 2 (01) : 15 - 23
  • [3] Joint Learning for Single-Image Super-Resolution via a Coupled Constraint
    Gao, Xinbo
    Zhang, Kaibing
    Tao, Dacheng
    Li, Xuelong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2012, 21 (02) : 469 - 480
  • [4] MADNet: A Fast and Lightweight Network for Single-Image Super Resolution
    Lan, Rushi
    Sun, Long
    Liu, Zhenbing
    Lu, Huimin
    Pang, Cheng
    Luo, Xiaonan
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (03) : 1443 - 1453
  • [5] Deep Shearlet Residual Learning Network for Single Image Super-Resolution
    Geng, Tianyu
    Liu, Xiao-Yang
    Wang, Xiaodong
    Sun, Guiling
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 4129 - 4142
  • [6] Rectified Binary Network for Single-Image Super-Resolution
    Xin, Jingwei
    Wang, Nannan
    Jiang, Xinrui
    Li, Jie
    Wang, Xiaoyu
    Gao, Xinbo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [7] Fast On-Device Learning Framework for Single-Image Super-Resolution
    Lee, Seok Hee
    Park, Karam
    Cho, Sunwoo
    Lee, Hyun-Seung
    Choi, Kyuha
    Cho, Nam Ik
    IEEE ACCESS, 2024, 12 : 37276 - 37287
  • [8] Fast Single-Image Super-Resolution with Filter Selection
    Salvador, Jordi
    Perez-Pellitero, Eduardo
    Kochale, Axel
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013, : 640 - 644