Efficient image super-resolution based on transformer with bidirectional interaction

Cited by: 1
Authors
Gendy, Garas [1 ]
He, Guanghui [1 ]
Sabor, Nabil [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Micronano Elect, Shanghai 200240, Peoples R China
[2] Assiut Univ, Fac Engn, Elect Engn Dept, Assiut 71516, Egypt
Funding
National Natural Science Foundation of China;
Keywords
Image super-resolution; Transformer models; Bidirectional interaction; Fully adaptive self-attention block; Fully adaptive transformer;
DOI
10.1016/j.asoc.2024.112039
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In single-image super-resolution (SISR), many methods benefit from both the local and global contexts of an image, yet no existing method models the bidirectional interaction between these two contexts. Inspired by the fully adaptive Transformer for high-level vision, we propose a fully adaptive Transformer super-resolution network (FATSRN) for SISR. The model exploits local and global information, together with their bidirectional interaction, in a context-aware manner. Its core component is fully adaptive self-attention (FASA), which uses self-modulated convolutions to adaptively extract local representations and applies self-attention in a down-sampled space to extract global representations. FASA then performs a bidirectional adaptation process between the local and global representations to model their interaction, and a fine-grained downsampling strategy further improves the down-sampled self-attention mechanism. Based on FASA, we build a fully adaptive self-attention block (FASAB), and fully adaptive self-attention groups (FASAG) form the backbone of FATSRN. Extensive experiments demonstrate the efficiency of the model against state-of-the-art methods. For example, on the B100 dataset at scale ×4, our model improves PSNR from 27.69 dB (SwinIR-light) to 27.73 dB. On Set5 at scale ×2, it achieves 0.04 dB higher PSNR than the state-of-the-art STSN model while using 64% fewer parameters and 48% fewer Mult-Adds.
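To make the mechanism described in the abstract concrete, the following is a minimal sketch of a FASA-style block, assuming PyTorch. All module names, the sigmoid gating design, and the pooled-attention details are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FASASketch(nn.Module):
    """Illustrative FASA-style block: local self-modulated convolution,
    global self-attention in a down-sampled space, and a bidirectional
    adaptation between the two branches. Hypothetical design."""

    def __init__(self, dim: int, heads: int = 4, pool_size: int = 8):
        super().__init__()
        # Local branch: depthwise conv, self-modulated by a sigmoid gate.
        self.local_conv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.local_gate = nn.Conv2d(dim, dim, 1)
        # Global branch: multi-head self-attention over a pooled grid.
        self.pool_size = pool_size
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Bidirectional adaptation: each branch gates the other.
        self.g2l = nn.Conv2d(dim, dim, 1)  # global modulates local
        self.l2g = nn.Conv2d(dim, dim, 1)  # local modulates global
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Local representation, adaptively self-modulated.
        local = self.local_conv(x) * torch.sigmoid(self.local_gate(x))
        # Global representation: attend in a down-sampled space, then
        # up-sample back to the full resolution.
        pooled = F.adaptive_avg_pool2d(x, self.pool_size)
        tokens = pooled.flatten(2).transpose(1, 2)       # (b, p*p, c)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        glob = attn_out.transpose(1, 2).reshape(
            b, c, self.pool_size, self.pool_size)
        glob = F.interpolate(glob, size=(h, w), mode="nearest")
        # Bidirectional interaction: cross-gate the two contexts.
        local_adapted = local * torch.sigmoid(self.g2l(glob))
        global_adapted = glob * torch.sigmoid(self.l2g(local))
        return x + self.proj(local_adapted + global_adapted)
```

In this reading, the "bidirectional interaction" is simply that the global context re-weights the local features and vice versa before the two are fused, rather than the usual one-way local-to-global aggregation.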
Pages: 10