Towards Lightweight Super-Resolution With Dual Regression Learning

Cited by: 1
|
Authors
Guo, Yong [1 ]
Tan, Mingkui [1 ,2 ,3 ]
Deng, Zeshuai [1 ,4 ]
Wang, Jingdong [5 ]
Chen, Qi [6 ]
Cao, Jiezhang [7 ]
Xu, Yanwu [8 ]
Chen, Jian [1 ]
Affiliations
[1] South China Univ Technol, Sch Software Engn, Guangzhou 510641, Peoples R China
[2] Pazhou Lab, Guangzhou 510335, Peoples R China
[3] South China Univ Technol, Minist Educ, Key Lab Big Data & Intelligent Robot, Guangzhou 510006, Peoples R China
[4] Pengcheng Lab, Shenzhen 518066, Peoples R China
[5] Baidu Inc, Beijing 100080, Peoples R China
[6] Univ Adelaide, Fac Engn, Adelaide, SA 5005, Australia
[7] Swiss Fed Inst Technol, CH-8092 Zurich, Switzerland
[8] South China Univ Technol, Sch Future Technol, Guangzhou 510641, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Redundancy; Image reconstruction; Computational modeling; Image coding; Task analysis; Superresolution; Training; Image super-resolution; dual regression; closed-loop learning; lightweight models; IMAGE SUPERRESOLUTION; ACCURATE; NETWORK; LIMITS;
DOI
10.1109/TPAMI.2024.3406556
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep neural networks have exhibited remarkable performance in image super-resolution (SR) by learning a mapping from low-resolution (LR) images to high-resolution (HR) images. However, SR is typically an ill-posed problem, and existing methods come with several limitations. First, the space of possible SR mappings can be extremely large, since many different HR images may be super-resolved from the same LR image. As a result, it is hard to directly learn a promising SR mapping from such a large space. Second, yielding promising SR performance often requires very large models with extremely high computational cost. In practice, one can use model compression techniques to obtain compact models by reducing model redundancy. Nevertheless, existing model compression methods struggle to accurately identify the redundant components due to the extremely large SR mapping space. To alleviate the first challenge, we propose a dual regression learning scheme that reduces the space of possible SR mappings. Specifically, in addition to the mapping from LR to HR images, we learn an additional dual regression mapping that estimates the downsampling kernel and reconstructs LR images. In this way, the dual mapping acts as a constraint that reduces the space of possible mappings. To address the second challenge, we propose a dual regression compression (DRC) method that reduces model redundancy at both the layer level and the channel level via channel pruning. Specifically, we first develop a channel number search method that minimizes the dual regression loss to determine the redundancy of each layer. Given the searched channel numbers, we further exploit the dual regression scheme to evaluate the importance of channels and prune the redundant ones. Extensive experiments show the effectiveness of our method in obtaining accurate and efficient SR models.
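The closed-loop objective described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the primal map f (LR to HR) and dual map g (HR to LR) are learned deep networks in the paper, whereas here they are replaced by fixed nearest-neighbour upsampling and average pooling, and the loss weight `lam` is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)
scale = 2  # assumed super-resolution factor for this toy example

def primal_upsample(lr):
    # Placeholder for the primal SR network f: LR -> HR
    # (nearest-neighbour upsampling stands in for the learned mapping).
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

def dual_downsample(hr):
    # Placeholder for the dual regression network g: HR -> LR, which in
    # the paper estimates the downsampling kernel (here: average pooling).
    h, w = hr.shape
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def dual_regression_loss(lr, hr, lam=0.1):
    sr = primal_upsample(lr)
    primal = np.mean((sr - hr) ** 2)                  # primal term: f(x_LR) vs x_HR
    dual = np.mean((dual_downsample(sr) - lr) ** 2)   # dual term: g(f(x_LR)) vs x_LR
    return primal + lam * dual                        # closed-loop objective

hr = rng.random((8, 8))          # toy HR "image"
lr = dual_downsample(hr)         # toy LR counterpart
loss = dual_regression_loss(lr, hr)
print(loss >= 0.0)
```

The dual term penalizes primal outputs whose downsampled version drifts away from the input LR image, which is how the dual mapping constrains the otherwise large space of admissible SR mappings.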
Pages: 8365 - 8379
Page count: 15
Related Papers
50 records in total
  • [1] Lightweight Super-Resolution Using Deep Neural Learning
    Jiang, Zhuqing
    Zhu, Honghui
    Lu, Yue
    Ju, Guodong
    Men, Aidong
    IEEE TRANSACTIONS ON BROADCASTING, 2020, 66 (04) : 814 - 823
  • [2] s-LWSR: Super Lightweight Super-Resolution Network
    Li, Biao
    Wang, Bo
    Liu, Jiabin
    Qi, Zhiquan
    Shi, Yong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 8368 - 8380
  • [3] Lightweight Super-Resolution Model for Complete Model Copyright Protection
    Xie, Bingyi
    Xu, Honghui
    Joe, Yongjoon
    Seo, Daehee
    Cai, Zhipeng
    TSINGHUA SCIENCE AND TECHNOLOGY, 2024, 29 (04) : 1194 - 1205
  • [4] Differentiable Neural Architecture Search for Extremely Lightweight Image Super-Resolution
    Huang, Han
    Shen, Li
    He, Chaoyang
    Dong, Weisheng
    Liu, Wei
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (06) : 2672 - 2682
  • [5] Lightweight Image Super-Resolution With Expectation-Maximization Attention Mechanism
    Zhu, Xiangyuan
    Guo, Kehua
    Ren, Sheng
    Hu, Bin
    Hu, Min
    Fang, Hui
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (03) : 1273 - 1284
  • [6] Criteria Comparative Learning for Real-Scene Image Super-Resolution
    Shi, Yukai
    Li, Hao
    Zhang, Sen
    Yang, Zhijing
    Wang, Xiao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (12) : 8476 - 8485
  • [7] Learning lightweight super-resolution networks with weight pruning
    Jiang, Xinrui
    Wang, Nannan
    Xin, Jingwei
    Xia, Xiaobo
    Yang, Xi
    Gao, Xinbo
    NEURAL NETWORKS, 2021, 144 : 21 - 32
  • [8] Towards Evolutionary Super-Resolution
    Kawulok, Michal
    Benecki, Pawel
    Kostrzewa, Daniel
    Skonieczny, Lukasz
    APPLICATIONS OF EVOLUTIONARY COMPUTATION, EVOAPPLICATIONS 2018, 2018, 10784 : 480 - 496
  • [9] Dual Circle Contrastive Learning-Based Blind Image Super-Resolution
    Qiu, Yajun
    Zhu, Qiang
    Zhu, Shuyuan
    Zeng, Bing
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (03) : 1757 - 1771
  • [10] Contextual Transformation Network for Lightweight Remote-Sensing Image Super-Resolution
    Wang, Shunzhou
    Zhou, Tianfei
    Lu, Yao
    Di, Huijun
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60