Efficient image super-resolution based on transformer with bidirectional interaction

Cited by: 1
Authors
Gendy, Garas [1]
He, Guanghui [1 ]
Sabor, Nabil [2 ]
Affiliations
[1] Shanghai Jiao Tong University, Department of Micro-Nano Electronics, Shanghai 200240, People's Republic of China
[2] Assiut University, Faculty of Engineering, Electrical Engineering Department, Assiut 71516, Egypt
Funding
National Natural Science Foundation of China
Keywords
Image super-resolution; Transformer models; Bidirectional interaction; Fully adaptive self-attention block; Fully adaptive transformer
DOI
10.1016/j.asoc.2024.112039
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In single-image super-resolution (SISR), many methods benefit from both the local and global contexts of the image, yet none exploits the bidirectional interaction between these two contexts. Inspired by the fully adaptive Transformer for high-level vision, we propose a fully adaptive Transformer super-resolution network (FATSRN) for SISR. The model uses local and global information and their bidirectional interaction in a context-aware manner. Its main building block is fully adaptive self-attention (FASA), which uses self-modulated convolutions to adaptively extract local representations and self-attention in down-sampled space to extract global representations. FASA then applies a bidirectional adaptation process between the local and global representations to model their interaction. Moreover, a fine-grained downsampling strategy improves the down-sampled self-attention mechanism. Based on FASA, we build a fully adaptive self-attention block (FASAB) as the main block of our model, and use fully adaptive self-attention groups (FASAG) as the backbone of FATSRN. Extensive experiments demonstrate the efficiency of the model against state-of-the-art methods. For example, our model improves PSNR from 27.69 dB to 27.73 dB over SwinIR-light on the B100 dataset at ×4 scale, and achieves 0.04 dB higher PSNR than the state-of-the-art STSN model on the Set5 dataset at ×2 scale, with 64% fewer parameters and 48% fewer Mult-Adds.
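The abstract describes the FASA design only at a high level. As a rough illustration of the general idea (local features from a gated convolution, global features from self-attention over a down-sampled grid, and bidirectional gating between the two), here is a minimal PyTorch sketch. All layer choices, names (FASASketch, local_gate, g2l, l2g), and sizes are our own assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FASASketch(nn.Module):
    """Illustrative sketch (not the authors' code) of a FASA-style block:
    local branch = self-modulated (gated) depth-wise convolution,
    global branch = self-attention computed on a down-sampled grid,
    plus a bidirectional gating step between the two branches."""

    def __init__(self, dim: int, heads: int = 4, down: int = 4):
        super().__init__()
        self.down = down
        # Local branch: depth-wise conv whose output is scaled by a gate
        # derived from the input itself ("self-modulated" convolution).
        self.local_conv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.local_gate = nn.Conv2d(dim, dim, 1)
        # Global branch: multi-head self-attention in down-sampled space.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Bidirectional adaptation: each context modulates the other.
        self.g2l = nn.Conv2d(dim, dim, 1)  # global gates local
        self.l2g = nn.Conv2d(dim, dim, 1)  # local gates global
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Local representation: conv features scaled by an input-derived gate.
        local = self.local_conv(x) * torch.sigmoid(self.local_gate(x))
        # Global representation: attention over a coarse grid, then
        # up-sampled back to full resolution (plain average pooling here,
        # standing in for the paper's fine-grained downsampling strategy).
        ds = F.adaptive_avg_pool2d(x, (h // self.down, w // self.down))
        tokens = ds.flatten(2).transpose(1, 2)            # (B, N, C)
        g, _ = self.attn(tokens, tokens, tokens)
        g = g.transpose(1, 2).reshape(b, c, h // self.down, w // self.down)
        glob = F.interpolate(g, size=(h, w), mode="bilinear",
                             align_corners=False)
        # Bidirectional interaction: local and global features gate each other.
        out = local * torch.sigmoid(self.g2l(glob)) \
            + glob * torch.sigmoid(self.l2g(local))
        return x + self.proj(out)                         # residual connection

# Usage: a 64-channel feature map, as in typical lightweight SR backbones.
x = torch.randn(1, 64, 48, 48)
y = FASASketch(dim=64)(x)
print(y.shape)  # torch.Size([1, 64, 48, 48])
```

Blocks of this kind would then be stacked, per the abstract's description, into FASAB- and FASAG-style groups forming the backbone; the paper's actual block layout and downsampling details are not given in this record.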
Pages: 10
Related papers (50 in total)
• [1] Zhang, Tianyi; Kasichainula, Kishore; Zhuo, Yaoxin; Li, Baoxin; Seo, Jae-Sun; Cao, Yu. Transformer-Based Selective Super-resolution for Efficient Image Refinement. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 7, 2024: 7305-7313.
• [2] Lin, Jianxin; Yin, Lianying; Wang, Yijun. Steformer: Efficient Stereo Image Super-Resolution With Transformer. IEEE Transactions on Multimedia, 2023, 25: 8396-8407.
• [3] Zheng, Ling; Zhu, Jinchen; Shi, Jinpeng; Weng, Shizhuang. Efficient mixed transformer for single image super-resolution. Engineering Applications of Artificial Intelligence, 2024, 133.
• [4] Zhang, Mingjin; Zhang, Chi; Zhang, Qiming; Guo, Jie; Gao, Xinbo; Zhang, Jing. ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution. 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 23016-23027.
• [5] Park, Soobin; Jeong, Yuna; Choi, Yong Suk. Efficient Dual Attention Transformer for Image Super-Resolution. 39th Annual ACM Symposium on Applied Computing (SAC 2024), 2024: 963-970.
• [6] Gao, Xiang; Wu, Sining; Zhou, Ying; Wang, Fan; Hu, Xiaopeng. LCFormer: linear complexity transformer for efficient image super-resolution. Multimedia Systems, 2024, 30(4).
• [7] Kang, Xudong; Duan, Puhong; Li, Jier; Li, Shutao. Efficient Swin Transformer for Remote Sensing Image Super-Resolution. IEEE Transactions on Image Processing, 2024, 33: 6367-6379.
• [8] Yan, Fengqi; Li, Shaokun; Zhou, Zhiguo; Shi, Yonggang. A Residual Network with Efficient Transformer for Lightweight Image Super-Resolution. Electronics, 2024, 13(1).
• [9] Lu, Zhisheng; Li, Juncheng; Liu, Hong; Huang, Chaoyan; Zhang, Linlin; Zeng, Tieyong. Transformer for Single Image Super-Resolution. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022), 2022: 456-465.
• [10] Zhang, Xiang; Zhang, Yulun; Yu, Fisher. HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution. Computer Vision - ECCV 2024, Pt. XL, 2025, 15098: 483-500.