A Residual Network with Efficient Transformer for Lightweight Image Super-Resolution

Times Cited: 0
Authors
Yan, Fengqi [1 ]
Li, Shaokun [1 ]
Zhou, Zhiguo [1 ,2 ]
Shi, Yonggang [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Integrated Circuits & Elect, Beijing 100081, Peoples R China
[2] Beijing Inst Technol, Tangshan Res Inst, Tangshan 063000, Peoples R China
Keywords
single-image super-resolution; blueprint-separable convolution; efficient transformer; spatial attention; channel attention
DOI
10.3390/electronics13010194
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Subject Classification Code
0812
Abstract
In recent years, deep learning approaches have achieved remarkable results in Single-Image Super-Resolution (SISR). To attain improved performance, most existing methods construct increasingly complex networks that demand extensive computational resources, which significantly impedes the advancement and real-world deployment of super-resolution techniques. Furthermore, many lightweight super-resolution networks employ knowledge distillation strategies to reduce parameter counts, which can considerably slow inference. In response to these challenges, we propose a Residual Network with an Efficient Transformer (RNET). RNET incorporates three effective design elements. First, we replace standard convolution with Blueprint-Separable Convolution (BSConv), effectively reducing the computational workload. Second, we propose a residual connection structure for local feature extraction, streamlining feature aggregation and accelerating inference. Third, we introduce an efficient transformer module to enhance the network's ability to aggregate contextual features, so that recovered images retain richer texture details. Additionally, spatial attention and channel attention mechanisms are integrated into the model, further strengthening its representational capacity. We evaluate the proposed method on five standard benchmark test sets. With these innovations, our network outperforms existing efficient SR methods on all test sets, achieving the best performance with the fewest parameters, particularly in recovering fine texture details.
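The first design element above, Blueprint-Separable Convolution, factorizes a standard convolution into a pointwise (1x1) convolution followed by a depthwise convolution. Below is a minimal PyTorch sketch of that factorization (the unconstrained BSConv-U variant); the channel widths and kernel size are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class BSConvU(nn.Module):
    """Blueprint-Separable Convolution, unconstrained variant (BSConv-U):
    a 1x1 pointwise convolution followed by a KxK depthwise convolution.
    A standard KxK conv needs C_in * C_out * K^2 weights; this factorization
    needs C_in * C_out + C_out * K^2, a large saving for typical K=3 layers.
    Sketch only; the paper's exact layer configuration is not given here."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # Pointwise step: mix information across channels first.
        self.pw = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        # Depthwise step: one KxK spatial filter per channel (groups=C_out).
        self.dw = nn.Conv2d(out_channels, out_channels, kernel_size,
                            padding=padding, groups=out_channels, bias=False)

    def forward(self, x):
        return self.dw(self.pw(x))

# Quick check: spatial size is preserved with kernel_size=3, padding=1.
x = torch.randn(1, 48, 64, 64)
print(BSConvU(48, 48)(x).shape)  # torch.Size([1, 48, 64, 64])
```

Reversing the usual depthwise-then-pointwise order lets each output channel be a scaled "blueprint" of one spatial kernel, which is what gives BSConv its parameter savings over a standard convolution of the same receptive field.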
Pages: 17
Related Papers
50 records in total
  • [1] Lightweight Image Super-Resolution with ConvNeXt Residual Network
    Zhang, Yong
    Bai, Haomou
    Bing, Yaxing
    Liang, Xiao
    NEURAL PROCESSING LETTERS, 2023, 55 (07) : 9545 - 9561
  • [2] Lightweight image super-resolution with multiscale residual attention network
    Xiao, Cunjun
    Dong, Hui
    Li, Haibin
    Li, Yaqian
    Zhang, Wenming
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (04)
  • [3] Partial convolution residual network for lightweight image super-resolution
    Zhang, Long
    Wan, Yi
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (11) : 8019 - 8030
  • [4] Lightweight single image super-resolution with attentive residual refinement network
    Qin, Jinghui
    Zhang, Rumin
    NEUROCOMPUTING, 2022, 500 : 846 - 855
  • [5] LBCRN: lightweight bidirectional correction residual network for image super-resolution
    Huang, Shuying
    Wang, Jichao
    Yang, Yong
    Wan, Weiguo
    Li, Guoqiang
    MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING, 2023, 34 (01) : 341 - 364
  • [6] An efficient feature reuse distillation network for lightweight image super-resolution
    Liu, Chunying
    Gao, Guangwei
    Wu, Fei
    Guo, Zhenhua
    Yu, Yi
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 249
  • [7] A very lightweight image super-resolution network
    Bai, Haomou
    Liang, Xiao
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [8] Lightweight Dual-Stream Residual Network for Single Image Super-Resolution
    Jiang Y.
    Liu Y.
    Zhan W.
    Zhu D.
    IEEE Access, 2021, 9 : 129890 - 129901