Towards Lightweight Super-Resolution With Dual Regression Learning

Cited by: 1
|
Authors
Guo, Yong [1 ]
Tan, Mingkui [1 ,2 ,3 ]
Deng, Zeshuai [1 ,4 ]
Wang, Jingdong [5 ]
Chen, Qi [6 ]
Cao, Jiezhang [7 ]
Xu, Yanwu [8 ]
Chen, Jian [1 ]
Affiliations
[1] South China Univ Technol, Sch Software Engn, Guangzhou 510641, Peoples R China
[2] Pazhou Lab, Guangzhou 510335, Peoples R China
[3] South China Univ Technol, Minist Educ, Key Lab Big Data & Intelligent Robot, Guangzhou 510006, Peoples R China
[4] Pengcheng Lab, Shenzhen 518066, Peoples R China
[5] Baidu Inc, Beijing 100080, Peoples R China
[6] Univ Adelaide, Fac Engn, Adelaide, SA 5005, Australia
[7] Swiss Fed Inst Technol, CH-8092 Zurich, Switzerland
[8] South China Univ Technol, Sch Future Technol, Guangzhou 510641, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Redundancy; Image reconstruction; Computational modeling; Image coding; Task analysis; Superresolution; Training; Image super-resolution; dual regression; closed-loop learning; lightweight models; IMAGE SUPERRESOLUTION; ACCURATE; NETWORK; LIMITS;
DOI
10.1109/TPAMI.2024.3406556
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks have exhibited remarkable performance in image super-resolution (SR) tasks by learning a mapping from low-resolution (LR) images to high-resolution (HR) images. However, the SR problem is typically ill-posed, and existing methods come with several limitations. First, the possible mapping space of SR can be extremely large, since many different HR images can be super-resolved from the same LR image. As a result, it is hard to directly learn a promising SR mapping from such a large space. Second, yielding promising SR performance often requires very large models with extremely high computational cost. In practice, one can use model compression techniques to obtain compact models by reducing model redundancy. Nevertheless, it is hard for existing model compression methods to accurately identify the redundant components due to the extremely large SR mapping space. To alleviate the first challenge, we propose a dual regression learning scheme to reduce the space of possible SR mappings. Specifically, in addition to the mapping from LR to HR images, we learn an additional dual regression mapping to estimate the downsampling kernel and reconstruct LR images. In this way, the dual mapping acts as a constraint that reduces the space of possible mappings. To address the second challenge, we propose a dual regression compression (DRC) method to reduce model redundancy at both the layer level and the channel level based on channel pruning. Specifically, we first develop a channel number search method that minimizes the dual regression loss to determine the redundancy of each layer. Given the searched channel numbers, we further exploit the dual regression scheme to evaluate the importance of channels and prune the redundant ones. Extensive experiments show the effectiveness of our method in obtaining accurate and efficient SR models.
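The dual regression constraint described in the abstract can be illustrated with a short training-loss sketch. The snippet below is a minimal illustration rather than the authors' released implementation: it assumes a primal network (LR to HR) and a dual network (HR to LR) are provided by the caller, and the weight `lambda_dual` is a hypothetical hyperparameter. The dual term forces the super-resolved output to map back to the original LR input, which is how the dual mapping constrains the space of admissible SR mappings.

```python
import torch
import torch.nn as nn

# Minimal sketch of a dual regression training objective (illustrative only, not
# the paper's released code). `primal_net` maps LR -> HR images; `dual_net` maps
# HR -> LR images; `lambda_dual` is a hypothetical weight for the dual term.
def dual_regression_loss(primal_net, dual_net, lr, hr, lambda_dual=0.1):
    l1 = nn.L1Loss()
    sr = primal_net(lr)                 # primal mapping: super-resolve the LR input
    primal_loss = l1(sr, hr)            # match the HR ground truth
    dual_loss = l1(dual_net(sr), lr)    # dual mapping must reconstruct the LR input
    return primal_loss + lambda_dual * dual_loss

if __name__ == "__main__":
    # Toy stand-ins for the real networks: bicubic up/down-sampling modules.
    primal = nn.Upsample(scale_factor=4, mode="bicubic")
    dual = nn.Upsample(scale_factor=0.25, mode="bicubic")
    lr = torch.rand(1, 3, 32, 32)
    hr = torch.rand(1, 3, 128, 128)
    print(dual_regression_loss(primal, dual, lr, hr).item())
```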
Pages: 8365 - 8379
Number of pages: 15
Related Papers
50 records in total
  • [21] TCSR: Lightweight Transformer and CNN Interaction Network for Image Super-Resolution
    Cai, Danlin
    Tan, Wenwen
    Chen, Feiyang
    Lou, Xinchi
    Xiahou, Jianbin
    Zhu, Daxin
    Huang, Detian
    IEEE ACCESS, 2024, 12 : 174782 - 174795
  • [22] Propagating Facial Prior Knowledge for Multitask Learning in Face Super-Resolution
    Wang, Chenyang
    Jiang, Junjun
    Zhong, Zhiwei
    Liu, Xianming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (11) : 7317 - 7331
  • [23] MESR: Multistage Enhancement Network for Image Super-Resolution
    Huang, Detian
    Chen, Jian
    IEEE ACCESS, 2022, 10 : 54599 - 54612
  • [24] Steformer: Efficient Stereo Image Super-Resolution With Transformer
    Lin, Jianxin
    Yin, Lianying
    Wang, Yijun
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8396 - 8407
  • [25] MRI Super-Resolution With Ensemble Learning and Complementary Priors
    Lyu, Qing
    Shan, Hongming
    Wang, Ge
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2020, 6 : 615 - 624
  • [26] Multitask Learning for Super-Resolution of Seismic Velocity Model
    Li, Yinshuo
    Song, Jianyong
    Lu, Wenkai
    Monkam, Patrice
    Ao, Yile
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021, 59 (09) : 8022 - 8033
  • [27] Learning From Errors in Super-Resolution
    Tang, Yi
    Yuan, Yuan
    IEEE TRANSACTIONS ON CYBERNETICS, 2014, 44 (11) : 2143 - 2154
  • [28] Dual residual and large receptive field network for lightweight image super-resolution
    Pan, Lulu
    Li, Guo
    Xu, Ke
    Lv, Yanheng
    Zhang, Wenbo
    Li, Lingxiao
    Lei, Le
    NEUROCOMPUTING, 2024, 600
  • [29] Quasi-supervised learning for super-resolution PET
    Yang, Guangtong
    Li, Chen
    Yao, Yudong
    Wang, Ge
    Teng, Yueyang
    COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2024, 113
  • [30] VolumeNet: A Lightweight Parallel Network for Super-Resolution of MR and CT Volumetric Data
    Li, Yinhao
    Iwamoto, Yutaro
    Lin, Lanfen
    Xu, Rui
    Tong, Ruofeng
    Chen, Yen-Wei
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 4840 - 4854