Multi-attention fusion transformer for single-image super-resolution

Cited by: 5
Authors
Li, Guanxing [1 ]
Cui, Zhaotong [1 ]
Li, Meng [1 ]
Han, Yu [1 ]
Li, Tianping [1 ]
Affiliations
[1] Shandong Normal Univ, Sch Phys & Elect, Jinan, Shandong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Super-resolution; Attention mechanism; Transformer; MAFT; Multi-attention fusion;
DOI
10.1038/s41598-024-60579-5
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
Recently, Transformer-based methods have gained prominence in image super-resolution (SR), addressing the challenge of long-range dependence through cross-layer connectivity and local attention mechanisms. However, analysis of these networks with local attribution maps reveals significant limitations in how much of the input's spatial extent they actually exploit. To unlock the inherent potential of Transformers in image SR, we propose the Multi-Attention Fusion Transformer (MAFT), a novel model that integrates multiple attention mechanisms to expand the number and range of pixels activated during image reconstruction, improving the effective use of the input information space. At the core of the model are the Multi-attention Adaptive Integration Groups, which facilitate the transition from dense local attention to sparse global attention through alternately connected Local Attention Aggregation and Global Attention Aggregation blocks, effectively broadening the network's receptive field. Comprehensive quantitative and qualitative experiments on benchmark datasets validate the proposed algorithm. Compared with state-of-the-art methods (e.g., HAT), the proposed MAFT achieves a 0.09 dB gain on the Urban100 dataset for the ×4 SR task while containing 32.55% fewer parameters and 38.01% fewer FLOPs.
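The alternation between dense local attention and sparse global attention described in the abstract can be sketched in miniature. The toy below is an illustrative assumption, not the paper's implementation: it uses scalar token features, non-overlapping windows for the local stage, and a strided token subset for the sparse global stage, just to show how chaining the two stages lets information reach tokens outside any single local window.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    # toy dot-product attention for scalar features
    weights = softmax([query * k for k in keys])
    return sum(w * v for w, v in zip(weights, values))

def local_attention(tokens, window=4):
    # dense attention inside non-overlapping windows
    # (stand-in for a Local Attention Aggregation block)
    out = []
    for start in range(0, len(tokens), window):
        block = tokens[start:start + window]
        out.extend(attend(q, block, block) for q in block)
    return out

def global_attention(tokens, stride=4):
    # sparse attention: each token attends only to a strided
    # subset spanning the whole sequence (stand-in for a
    # Global Attention Aggregation block)
    out = []
    for i, q in enumerate(tokens):
        subset = tokens[i % stride::stride]
        out.append(attend(q, subset, subset))
    return out

tokens = [float(i) for i in range(8)]
# Alternating connection: local (dense) mixing followed by
# global (sparse) mixing, so every output token is influenced
# by positions outside its original 4-token window.
mixed = global_attention(local_attention(tokens))
```

Within one local window the attention is uniform for a zero query (e.g., the first token's local output is the window mean, 1.5); the subsequent sparse global stage then mixes values across windows, which is the receptive-field widening the alternating design aims at.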
Pages: 19