Residual trio feature network for efficient super-resolution

Cited by: 0
Authors
Chen, Junfeng [1 ]
Mao, Mao [2 ]
Guan, Azhu [2 ]
Ayush, Altangerel [3 ]
Affiliations
[1] Hohai Univ, Coll Artificial Intelligence & Automat, Changzhou 213200, Peoples R China
[2] Hohai Univ, Coll Informat Sci & Engn, Changzhou 213200, Peoples R China
[3] Mongolian Univ Sci & Technol, Sch ICT, Ulaanbaatar 13341, Mongolia
Keywords
Image inpainting; Image super-resolution; Re-parameterization
DOI
10.1007/s40747-024-01624-8
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep learning-based approaches have demonstrated impressive performance in single-image super-resolution (SISR). Efficient super-resolution methods, however, trade reconstructed image quality for fewer parameters and FLOPs, so keeping inference efficient while improving reconstruction quality remains a significant challenge. This paper proposes a trio branch module (TBM) based on structural re-parameterization. TBM performs an equivalence transformation: a complex multi-branch structure is used in the training phase and converted into a more lightweight structure for inference, achieving efficient inference while maintaining accuracy. Based on the TBM, we further design a lightweight variant of enhanced spatial attention (ESA-mini) and the residual trio feature block (RTFB). Multiple RTFBs are then combined to construct the residual trio feature network (RTFN). Finally, we introduce a localized contrast loss tailored to the super-resolution task, which further enhances the reconstruction quality of the model. Experiments show that the proposed RTFN outperforms other state-of-the-art efficient super-resolution methods in both inference speed and reconstruction quality.
Pages: 12
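
The abstract's central idea is structural re-parameterization: train with several parallel branches, then fold them into a single convolution for inference. The exact branch design of the paper's TBM is not given in this record, so the following is only a minimal PyTorch sketch assuming a common RepVGG-style trio (3x3 conv, 1x1 conv, identity); the class and method names are hypothetical, not the authors' API.

```python
# Minimal sketch of structural re-parameterization, assuming a trio of
# 3x3 conv, 1x1 conv, and identity branches (the paper's actual TBM
# layout is not specified in this record).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TrioBranchSketch(nn.Module):
    """Three parallel branches at training time, one 3x3 conv at inference."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=True)
        self.conv1 = nn.Conv2d(channels, channels, 1, bias=True)
        self.channels = channels

    def forward(self, x):
        # Training-time form: sum of 3x3 conv, 1x1 conv, and identity branches.
        return self.conv3(x) + self.conv1(x) + x

    def reparameterize(self) -> nn.Conv2d:
        """Fold the three branches into a mathematically equivalent 3x3 conv."""
        w3, b3 = self.conv3.weight.data, self.conv3.bias.data
        # Pad the 1x1 kernel to 3x3 so it can be added to the 3x3 kernel.
        w1 = F.pad(self.conv1.weight.data, [1, 1, 1, 1])
        b1 = self.conv1.bias.data
        # The identity branch equals a 3x3 conv whose kernel is 1 at the
        # centre of its own channel and 0 elsewhere.
        wid = torch.zeros_like(w3)
        for c in range(self.channels):
            wid[c, c, 1, 1] = 1.0
        fused = nn.Conv2d(self.channels, self.channels, 3, padding=1, bias=True)
        fused.weight.data = w3 + w1 + wid
        fused.bias.data = b3 + b1
        return fused


# Quick equivalence check: the fused conv should match the three-branch
# forward pass up to floating-point error.
block = TrioBranchSketch(8).eval()
x = torch.randn(1, 8, 16, 16)
fused = block.reparameterize()
assert torch.allclose(block(x), fused(x), atol=1e-5)
```

Because the fused convolution is exactly equivalent, the inference-time model keeps the accuracy of the multi-branch training-time model while running with the cost of a single 3x3 convolution, which is the efficiency argument the abstract makes for TBM.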