Efficient image super-resolution based on transformer with bidirectional interaction

Cited by: 1
Authors
Gendy, Garas [1 ]
He, Guanghui [1 ]
Sabor, Nabil [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Micronano Elect, Shanghai 200240, Peoples R China
[2] Assiut Univ, Fac Engn, Elect Engn Dept, Assiut 71516, Egypt
Funding
National Natural Science Foundation of China;
Keywords
Image super-resolution; Transformer models; Bidirectional interaction; Fully adaptive self-attention block; Fully adaptive transformer;
DOI
10.1016/j.asoc.2024.112039
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In single-image super-resolution (SISR), many methods exploit both the local and global contexts of the image, yet none models the bidirectional interaction between the two. Inspired by the fully adaptive Transformer for high-level vision, we propose a fully adaptive Transformer super-resolution network (FATSRN) for SISR, which uses local and global information, and their bidirectional interaction, in a context-aware manner. Its core unit is fully adaptive self-attention (FASA), which extracts local representations adaptively with self-modulated convolutions and extracts global representations with self-attention computed in a down-sampled space; a bidirectional adaptation process between the local and global representations models their interaction, and a fine-grained down-sampling strategy further improves the down-sampled self-attention mechanism. Based on FASA, we build the fully adaptive self-attention block (FASAB), and from FASAB the fully adaptive self-attention group (FASAG), which serves as the backbone of FATSRN. Extensive experiments demonstrate the efficiency of the model against state-of-the-art methods. For example, FATSRN improves PSNR from 27.69 dB to 27.73 dB over SwinIR-light on the B100 dataset at scale ×4, and achieves 0.04 dB higher PSNR than the state-of-the-art STSN model on the Set5 dataset at scale ×2 with 64% fewer parameters and 48% fewer Mult-Adds.
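The FASA design described in the abstract combines three ingredients: a self-modulated local branch, self-attention in a down-sampled token space, and a bidirectional gating between the two branches. The paper's actual layer definitions are not reproduced in this record, so the following is only a minimal NumPy sketch of that idea, with the 3-tap modulated smoothing, the pooling factor, and the sigmoid gating all being illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fasa_sketch(x, pool=2):
    """Toy FASA-style block; x has shape (tokens, channels).

    Local branch: a 3-tap smoothing whose output is modulated by the
    input itself (stand-in for the self-modulated convolution).
    Global branch: softmax self-attention over down-sampled tokens
    (stand-in for down-sampled self-attention).
    The two branches then gate each other (bidirectional adaptation).
    """
    n, c = x.shape

    # local representation: 3-tap smoothing, input-dependent modulation
    pad = np.pad(x, ((1, 1), (0, 0)), mode="edge")
    local = (pad[:-2] + pad[1:-1] + pad[2:]) / 3.0
    local = local * sigmoid(x)

    # global representation: attention computed in down-sampled space
    xs = x[::pool]                          # (n//pool, c) tokens
    attn = softmax(xs @ xs.T / np.sqrt(c))  # token-token attention
    g = attn @ xs
    glob = np.repeat(g, pool, axis=0)[:n]   # back to n tokens

    # bidirectional interaction: each branch gates the other
    return local * sigmoid(glob) + glob * sigmoid(local)

x = np.random.default_rng(0).standard_normal((8, 4))
y = fasa_sketch(x)
```

Computing attention on the pooled tokens is what keeps the global branch cheap (attention cost drops by `pool**2`), which is consistent with the efficiency claims in the abstract.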
Pages: 10
Related papers
50 records
  • [1] Efficient Dual Attention Transformer for Image Super-Resolution
    Park, Soobin
    Jeong, Yuna
    Choi, Yong Suk
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 963 - 970
  • [2] Steformer: Efficient Stereo Image Super-Resolution With Transformer
    Lin, Jianxin
    Yin, Lianying
    Wang, Yijun
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8396 - 8407
  • [3] Efficient Swin Transformer for Remote Sensing Image Super-Resolution
    Kang, Xudong
    Duan, Puhong
    Li, Jier
    Li, Shutao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 6367 - 6379
  • [4] Lightweight Wavelet-Based Transformer for Image Super-Resolution
    Ran, Jinye
    Zhang, Zili
    PRICAI 2022: TRENDS IN ARTIFICIAL INTELLIGENCE, PT III, 2022, 13631 : 368 - 382
  • [5] Efficient image super-resolution integration
    Xu, Ke
    Wang, Xin
    Yang, Xin
    He, Shengfeng
    Zhang, Qiang
    Yin, Baocai
    Wei, Xiaopeng
    Lau, Rynson W. H.
    VISUAL COMPUTER, 2018, 34 (6-8) : 1065 - 1076
  • [6] Efficient Blind Image Super-Resolution
    Vais, Olga
    Makarov, Ilya
    ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2023, PT II, 2023, 14135 : 229 - 240
  • [8] Image Super-Resolution via Efficient Transformer Embedding Frequency Decomposition With Restart
    Zuo, Yifan
    Yao, Wenhao
    Hu, Yuqi
    Fang, Yuming
    Liu, Wei
    Peng, Yuxin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 4670 - 4685
  • [9] Dual path features interaction network for efficient image super-resolution
    Yang, Huimin
    Xiao, Jingzhong
    Zhang, Ji
    Tian, Yu
    Zhou, Xuchuan
    NEUROCOMPUTING, 2024, 601
  • [10] Enhancing Image Super-Resolution with Dual Compression Transformer
    Yu, Jiaxing
    Chen, Zheng
    Wang, Jingkai
    Kong, Linghe
    Yan, Jiajie
    Gu, Wei
    VISUAL COMPUTER, 2025, 41 (07) : 4879 - 4892