Towards Lightweight Super-Resolution With Dual Regression Learning

Cited by: 1
Authors
Guo, Yong [1 ]
Tan, Mingkui [1 ,2 ,3 ]
Deng, Zeshuai [1 ,4 ]
Wang, Jingdong [5 ]
Chen, Qi [6 ]
Cao, Jiezhang [7 ]
Xu, Yanwu [8 ]
Chen, Jian [1 ]
Affiliations
[1] South China Univ Technol, Sch Software Engn, Guangzhou 510641, Peoples R China
[2] Pazhou Lab, Guangzhou 510335, Peoples R China
[3] South China Univ Technol, Minist Educ, Key Lab Big Data & Intelligent Robot, Guangzhou 510006, Peoples R China
[4] Pengcheng Lab, Shenzhen 518066, Peoples R China
[5] Baidu Inc, Beijing 100080, Peoples R China
[6] Univ Adelaide, Fac Engn, Adelaide, SA 5005, Australia
[7] Swiss Fed Inst Technol, CH-8092 Zurich, Switzerland
[8] South China Univ Technol, Sch Future Technol, Guangzhou 510641, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Redundancy; Image reconstruction; Computational modeling; Image coding; Task analysis; Superresolution; Training; Image super-resolution; dual regression; closed-loop learning; lightweight models; IMAGE SUPERRESOLUTION; ACCURATE; NETWORK; LIMITS;
DOI
10.1109/TPAMI.2024.3406556
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks have exhibited remarkable performance in image super-resolution (SR) by learning a mapping from low-resolution (LR) images to high-resolution (HR) images. However, SR is typically an ill-posed problem, and existing methods face two limitations. First, the space of possible SR mappings can be extremely large, since many different HR images may be super-resolved from the same LR image; it is therefore hard to directly learn a promising SR mapping from such a large space. Second, achieving promising SR performance often requires very large models with extremely high computational cost. In practice, one can use model compression techniques to obtain compact models by reducing model redundancy. Nevertheless, existing compression methods struggle to accurately identify the redundant components because of the extremely large SR mapping space. To alleviate the first challenge, we propose a dual regression learning scheme that reduces the space of possible SR mappings. Specifically, in addition to the mapping from LR to HR images, we learn an additional dual regression mapping that estimates the downsampling kernel and reconstructs LR images. In this way, the dual mapping acts as a constraint that reduces the space of possible mappings. To address the second challenge, we propose a dual regression compression (DRC) method that reduces model redundancy at both the layer level and the channel level via channel pruning. Specifically, we first develop a channel number search method that minimizes the dual regression loss to determine the redundancy of each layer. Given the searched channel numbers, we further exploit the dual regression manner to evaluate the importance of channels and prune the redundant ones. Extensive experiments demonstrate the effectiveness of our method in obtaining accurate and efficient SR models.
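The closed-loop idea in the abstract can be sketched as a training objective: a primal model maps LR to HR, a dual model maps the prediction back to LR, and the round-trip reconstruction error constrains the space of admissible SR mappings. Below is a minimal illustration, not the paper's implementation: the function names, the trade-off weight `lam`, and the toy nearest-neighbor/average-pooling mappings standing in for the primal and dual networks are all illustrative assumptions.

```python
import numpy as np

def dual_regression_loss(lr, hr, primal, dual, lam=0.1):
    """Sketch of a closed-loop dual regression objective.

    primal: LR -> HR mapping (the SR model)
    dual:   HR -> LR mapping (models the downsampling)
    The dual term enforces that a predicted HR image must
    downsample back to the LR input, shrinking the mapping space.
    """
    sr = primal(lr)                           # super-resolved estimate
    primal_loss = np.abs(sr - hr).mean()      # L1 loss against the HR target
    dual_loss = np.abs(dual(sr) - lr).mean()  # cycle (dual regression) constraint
    return primal_loss + lam * dual_loss

# Toy stand-ins for the networks (illustration only):
def toy_primal(lr):
    # nearest-neighbor 2x upsampling
    return lr.repeat(2, axis=0).repeat(2, axis=1)

def toy_dual(hr):
    # 2x average pooling, the inverse of the toy primal above
    h, w = hr.shape
    return hr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

lr = np.arange(16, dtype=float).reshape(4, 4)
hr = toy_primal(lr)  # an HR target that the toy primal matches exactly
loss = dual_regression_loss(lr, hr, toy_primal, toy_dual)
# Both terms vanish here: the primal output equals the target, and
# average-pooling the upsampled image recovers the LR input exactly.
```

In actual training both mappings would be networks optimized jointly, so the dual term penalizes SR outputs that are plausible-looking but inconsistent with the observed LR image.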
Pages: 8365-8379
Page count: 15