Joint features-guided linear transformer and CNN for efficient image super-resolution

Cited by: 2
Authors
Wang, Bufan [1 ]
Zhang, Yongjun [1 ]
Long, Wei [1 ]
Cui, Zhongwei [2 ]
Affiliations
[1] Guizhou Univ, Coll Comp Sci & Technol, State Key Lab Publ Big Data, Guiyang 550025, Guizhou, Peoples R China
[2] Guizhou Educ Univ, Sch Math & Big Data, Guiyang 550018, Peoples R China
Keywords
Image super-resolution; Multi-level contextual information; Linear self-attention; Lightweight network; NETWORK;
DOI
10.1007/s13042-024-02277-2
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Integrating convolutional neural networks (CNNs) and transformers has notably improved lightweight single image super-resolution (SISR) tasks. However, existing methods lack the capability to exploit multi-level contextual information, and transformer computations inherently add quadratic complexity. To address these issues, we propose a Joint features-Guided Linear Transformer and CNN Network (JGLTN) for efficient SISR, which is constructed by cascading modules composed of CNN layers and linear transformer layers. Specifically, in the CNN layer, our approach employs an inter-scale feature integration module (IFIM) to extract critical latent information across scales. Then, in the linear transformer layer, we design a joint feature-guided linear attention (JGLA). It jointly considers adjacent and extended regional features, dynamically assigning weights to convolutional kernels for contextual feature selection. This process garners multi-level contextual information, which is used to guide linear attention for effective information interaction. Moreover, we redesign the method of computing feature similarity within the self-attention, reducing its computational complexity to linear. Extensive experiments show that our proposal outperforms state-of-the-art models while balancing performance and computational costs.
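The abstract's key efficiency claim is that feature similarity in self-attention can be computed with linear rather than quadratic complexity in sequence length. The paper's specific JGLA similarity redesign is not reproduced here; the sketch below only illustrates the general kernel-based linear-attention idea it builds on, using the common `elu(x) + 1` feature map (an assumption, not the authors' formulation): by associating the matrix product as `phi(Q) (phi(K)^T V)`, the N x N attention map is never formed.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernel-based linear attention: O(N * d^2) instead of O(N^2 * d).

    Uses the generic feature map phi(x) = elu(x) + 1 as a stand-in;
    the JGLA similarity function from the paper is not public here,
    so this only demonstrates the linear-complexity reformulation.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 > 0
    Qp, Kp = phi(Q), phi(K)                              # (N, d) each
    KV = Kp.T @ V                                        # (d, d): keys/values aggregated once
    Z = Qp @ Kp.sum(axis=0, keepdims=True).T + eps       # (N, 1) row normalizer
    return (Qp @ KV) / Z                                 # (N, d), no N x N map formed
```

Because `phi(Q) (phi(K)^T V) = (phi(Q) phi(K)^T) V`, the result matches the explicit quadratic computation up to floating-point error, while the cost grows linearly with the number of tokens N.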
Pages: 5765-5780 (16 pages)