MadFormer: multi-attention-driven image super-resolution method based on Transformer

Cited by: 0
Authors
Beibei Liu
Jing Sun
Bing Zhu
Ting Li
Fuming Sun
Affiliations
[1] Dalian Minzu University,School of Information and Communication Engineering
[2] Harbin Institute of Technology,School of Electronic and Information Engineering
Source
Multimedia Systems | 2024 / Vol. 30
Keywords
Image super-resolution; Transformer; Multi-attention-driven; Dynamic fusion
DOI
Not available
Abstract
Although Transformer-based methods have demonstrated exceptional performance on low-level vision tasks, their strong modeling ability is confined to local regions, neglecting the spatial feature information and high-frequency channel details that matter for super-resolution. To enhance feature information and improve visual quality, we propose MadFormer, a multi-attention-driven image super-resolution method based on a Transformer network. First, the low-resolution image passes through an initial convolution to extract shallow features and is fed into a residual multi-attention block that incorporates channel attention, spatial attention, and self-attention mechanisms. Multi-head self-attention captures global and local feature information, while channel attention and spatial attention capture high-frequency features in the channel and spatial domains, respectively. The deep features are then passed to a dynamic fusion block that adaptively fuses the multi-attention features and aggregates cross-window information. Finally, the shallow and deep features are fused via convolution, and high-resolution images are produced through high-quality reconstruction. Comprehensive quantitative and qualitative comparisons with other advanced algorithms demonstrate the substantial advantages of the proposed approach in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
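The pipeline in the abstract (shallow convolution, multi-attention feature extraction, dynamic fusion, reconstruction) can be sketched loosely in NumPy. This is a minimal illustration only: the helper names (`channel_attention`, `spatial_attention`, `dynamic_fusion`), the squeeze-and-excitation-style gating, and the fixed fusion weights are assumptions for exposition, not the authors' actual block designs (which involve learned projections and multi-head self-attention).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). One gate per channel, derived from a global average
    # pool (the paper's learned gating layers are omitted in this sketch).
    gate = sigmoid(feat.mean(axis=(1, 2)))      # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    # One gate per spatial position, derived from the channel-mean map,
    # emphasizing high-frequency regions in the spatial domain.
    gate = sigmoid(feat.mean(axis=0))           # (H, W)
    return feat * gate[None, :, :]

def dynamic_fusion(branches, weights):
    # Convex combination of the attention branches. In MadFormer the
    # fusion is dynamic (weights predicted from the features); here the
    # weights are fixed inputs, purely for illustration.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * b for wi, b in zip(w, branches))

# Toy feature map standing in for the shallow-convolution output.
feat = np.random.default_rng(0).standard_normal((8, 16, 16))
fused = dynamic_fusion(
    [channel_attention(feat), spatial_attention(feat)], [0.5, 0.5]
)
print(fused.shape)  # (8, 16, 16)
```

The fused deep features would then be added back to the shallow features and upsampled by the reconstruction stage; both of those steps are standard convolutions and are omitted above.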
Related papers
50 results
  • [1] MadFormer: multi-attention-driven image super-resolution method based on Transformer
    Liu, Beibei
    Sun, Jing
    Zhu, Bing
    Li, Ting
    Sun, Fuming
    MULTIMEDIA SYSTEMS, 2024, 30 (02)
  • [2] Efficient Dual Attention Transformer for Image Super-Resolution
    Park, Soobin
    Jeong, Yuna
    Choi, Yong Suk
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 963 - 970
  • [3] Reference-Based Image Super-Resolution with Deformable Attention Transformer
    Cao, Jiezhang
    Liang, Jingyun
    Zhang, Kai
    Li, Yawei
    Zhang, Yulun
    Wang, Wenguan
    Van Gool, Luc
    COMPUTER VISION - ECCV 2022, PT XVIII, 2022, 13678 : 325 - 342
  • [4] Efficient Multi-Scale Cosine Attention Transformer for Image Super-Resolution
    Chen, Yuzhen
    Wang, Gencheng
    Chen, Rong
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 1442 - 1446
  • [5] Multi-attention fusion transformer for single-image super-resolution
    Li, Guanxing
    Cui, Zhaotong
    Li, Meng
    Han, Yu
    Li, Tianping
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [6] The Method of Industrial Internet Image Super-resolution Based on Transformer
    Liu, Lin
    Yu, Yingjie
    Wang, Juncheng
    Jin, Yi
    Zeng, Yuqiao
    2022 16TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP2022), VOL 1, 2022, : 260 - 265
  • [7] Multi-Attention Multi-Image Super-Resolution Transformer (MAST) for Remote Sensing
    Li, Jiaao
    Lv, Qunbo
    Zhang, Wenjian
    Zhu, Baoyu
    Zhang, Guiyu
    Tan, Zheng
    REMOTE SENSING, 2023, 15 (17)
  • [8] A Novel Image Super-Resolution Method Based on Attention Mechanism
    Li, Da
    Wang, Yan
    Liu, Dong
    Li, Ruifang
    2020 4TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND INFORMATION TECHNOLOGY (CMVIT 2020), 2020, 1518
  • [9] Edge-Aware Attention Transformer for Image Super-Resolution
    Wang, Haoqian
    Xing, Zhongyang
    Xu, Zhongjie
    Cheng, Xiangai
    Li, Teng
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 2905 - 2909
  • [10] Parallel attention recursive generalization transformer for image super-resolution
    Wang, Jing
    Hao, Yuanyuan
    Bai, Hongxing
    Yan, Lingyu
    SCIENTIFIC REPORTS, 15 (1)