Efficient Dual Attention Transformer for Image Super-Resolution

Cited by: 0
Authors
Park, Soobin [1 ]
Jeong, Yuna [1 ]
Choi, Yong Suk [1 ]
Affiliations
[1] Hanyang Univ, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
Image super-resolution; Low-level vision; Vision transformer; Self-attention; Computer vision;
DOI
10.1145/3605098.3635991
Chinese Library Classification (CLC)
TP39 (Applications of Computers)
Discipline Codes
081203; 0835
Abstract
Research based on computationally efficient local-window self-attention has been actively advancing in the field of image super-resolution (SR), leading to significant performance improvements. However, in most recent studies, local-window self-attention focuses only on the spatial dimension, without sufficient consideration of the channel dimension. Additionally, extracting global information while maintaining the efficiency of local-window self-attention still remains a challenging task in image SR. To resolve these problems, we propose a novel efficient dual attention transformer (EDAT). Our EDAT presents a dual attention block (DAB) that enables the exploration of interdependencies not only among features at diverse spatial locations but also among distinct channels. Moreover, we propose a global attention block (GAB) that achieves efficient global feature extraction by reducing the spatial size of the keys and values. Our extensive experiments demonstrate that DAB and GAB complement each other, exhibiting a synergistic effect. Furthermore, based on these two attention blocks, our EDAT achieves state-of-the-art results on five benchmark datasets.
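The two ideas sketched in the abstract can be illustrated in a few lines of numpy. This is not the paper's EDAT implementation (which is not reproduced in this record) but a minimal, hypothetical sketch of the underlying mechanisms: channel attention computes a C×C attention map whose cost is independent of spatial size, and global attention becomes cheaper when the keys and values are spatially downsampled (here by simple average pooling with a hypothetical reduction factor `r`).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention_reduced_kv(x, r):
    """Global spatial attention with keys/values average-pooled by factor r.
    x: (N, C) tokens flattened from an H*W feature map; N divisible by r.
    Cost drops from O(N^2 * C) to O(N * (N/r) * C)."""
    n, c = x.shape
    q = x                                            # (N, C) full-resolution queries
    kv = x.reshape(n // r, r, c).mean(axis=1)        # (N/r, C) pooled keys/values
    attn = softmax(q @ kv.T / np.sqrt(c))            # (N, N/r) attention map
    return attn @ kv                                 # (N, C)

def channel_attention(x):
    """Attention over the channel dimension: a C x C map, so the
    cost does not grow quadratically with spatial size."""
    n, c = x.shape
    attn = softmax(x.T @ x / np.sqrt(n))             # (C, C)
    return x @ attn                                  # (N, C)

# Toy example: 16 spatial tokens, 8 channels, reduction factor 4.
x = np.random.rand(16, 8)
y_spatial = global_attention_reduced_kv(x, r=4)      # attends to 4 pooled tokens
y_channel = channel_attention(x)                     # attends across 8 channels
```

In the sketch, each query still attends globally, but only over N/r pooled key/value tokens; the channel branch captures cross-channel dependencies that purely spatial window attention misses.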
Pages: 963–970 (8 pages)