Multi-attention fusion transformer for single-image super-resolution

Cited by: 5
Authors
Li, Guanxing [1 ]
Cui, Zhaotong [1 ]
Li, Meng [1 ]
Han, Yu [1 ]
Li, Tianping [1 ]
Affiliations
[1] Shandong Normal Univ, Sch Phys & Elect, Jinan, Shandong, Peoples R China
Source
SCIENTIFIC REPORTS | 2024, Vol. 14, Issue 1
Funding
National Natural Science Foundation of China;
Keywords
Super-resolution; Attention mechanism; Transformer; MAFT; Multi-attention fusion;
DOI
10.1038/s41598-024-60579-5
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Recently, Transformer-based methods have gained prominence in image super-resolution (SR) tasks, addressing the challenge of long-range dependencies through the incorporation of cross-layer connectivity and local attention mechanisms. However, the analysis of these networks using local attribution maps has revealed significant limitations in leveraging the spatial extent of input information. To unlock the inherent potential of Transformers in image SR, we propose the Multi-Attention Fusion Transformer (MAFT), a novel model designed to integrate multiple attention mechanisms with the objective of expanding the number and range of pixels activated during image reconstruction. This integration enhances the effective utilization of the input information space. At the core of our model lie the Multi-attention Adaptive Integration Groups, which facilitate the transition from dense local attention to sparse global attention through the introduction of Local Attention Aggregation and Global Attention Aggregation blocks with alternating connections, effectively broadening the network's receptive field. The effectiveness of the proposed algorithm has been validated through comprehensive quantitative and qualitative evaluation experiments conducted on benchmark datasets. Compared with state-of-the-art methods (e.g., HAT), the proposed MAFT achieves a 0.09 dB gain on the Urban100 dataset for the ×4 SR task while using 32.55% fewer parameters and 38.01% fewer FLOPs.
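The abstract describes Multi-attention Adaptive Integration Groups that alternate dense local attention with sparse global attention to widen the receptive field. The following PyTorch sketch is not the authors' released code; it only illustrates that alternating structure, and the module names, window size, stride, and head counts are illustrative assumptions.

```python
# A minimal sketch, NOT the paper's implementation: it only illustrates the
# alternating local/global attention structure described in the abstract.
# Module names, window size, stride, and head counts below are assumptions.
import torch
import torch.nn as nn


class LocalAttentionAggregation(nn.Module):
    """Dense attention inside non-overlapping windows (hypothetical stand-in)."""

    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window = window
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, H*W, C), square map assumed
        b, n, c = x.shape
        s, w = int(n ** 0.5), self.window
        # split the feature map into (s/w)^2 windows of w*w tokens each
        t = x.view(b, s // w, w, s // w, w, c).permute(0, 1, 3, 2, 4, 5)
        t = t.reshape(-1, w * w, c)
        y = self.norm(t)
        t = t + self.attn(y, y, y)[0]           # dense attention within each window
        t = t.view(b, s // w, s // w, w, w, c).permute(0, 1, 3, 2, 4, 5)
        return t.reshape(b, n, c)


class GlobalAttentionAggregation(nn.Module):
    """Sparse attention across strided token groups (hypothetical stand-in)."""

    def __init__(self, dim, stride=8, heads=4):
        super().__init__()
        self.stride = stride
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, H*W, C)
        b, n, c = x.shape
        g = self.stride
        # tokens whose flat index shares the same residue modulo `stride`
        # attend to each other, giving each token a long-range receptive field
        t = x.view(b, n // g, g, c).transpose(1, 2).reshape(-1, n // g, c)
        y = self.norm(t)
        t = t + self.attn(y, y, y)[0]
        return t.view(b, g, n // g, c).transpose(1, 2).reshape(b, n, c)


class MultiAttentionGroup(nn.Module):
    """Alternates local and global blocks, loosely mirroring the described groups."""

    def __init__(self, dim, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            LocalAttentionAggregation(dim) if i % 2 == 0 else GlobalAttentionAggregation(dim)
            for i in range(depth)
        )

    def forward(self, x):
        for blk in self.blocks:
            x = blk(x)
        return x


if __name__ == "__main__":
    feats = torch.randn(1, 64 * 64, 60)            # 64x64 feature map, 60 channels
    print(MultiAttentionGroup(60)(feats).shape)    # torch.Size([1, 4096, 60])
```

In this reading, every other block mixes information densely within small windows, while the intervening blocks let strided token groups exchange information across the whole feature map, which is the mechanism the abstract credits for enlarging the range of activated pixels.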
Pages: 19