EHAT:Enhanced Hybrid Attention Transformer for Remote Sensing Image Super-Resolution
Cited by: 0
Authors:
Wang, Jian [1]; Xie, Zexin [1]; Du, Yanlin [1]; Song, Wei [1]
Affiliation:
[1] Shanghai Ocean Univ, Coll Informat Technol, Shanghai, Peoples R China
Source:
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT VIII
2025 / Vol. 15038
Keywords:
Vision Transformer;
remote sensing;
self-attention;
super-resolution;
nonlocal neural network;
DOI:
10.1007/978-981-97-8685-5_16
Chinese Library Classification:
TP18 [Artificial Intelligence Theory];
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
In recent years, deep learning (DL)-based super-resolution techniques for remote sensing images have made significant progress. However, these models are limited in their ability to exploit long-range non-local information and to reuse features, and they can also suffer from vanishing and exploding gradients. To overcome these challenges, we propose the Enhanced Hybrid Attention Transformer (EHAT) framework, which builds on the Hybrid Attention Transformer (HAT) backbone and combines a region-level nonlocal neural network block with a skip fusion network (SFN) to form a new skip fusion attention group (SFAG). In addition, we form a Multi-Attention Block (MAB) by introducing a spatial frequency block (SFB) based on fast Fourier convolution. Extensive experiments on the UC Merced, CLRS, and RSSCN7 datasets show that our method improves PSNR by about 0.2 dB on UC Merced ×4.
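Note: the record does not include code. The sketch below is only a minimal PyTorch illustration of the general idea behind a spatial frequency block built on fast Fourier convolution (a local convolution branch fused with a frequency-domain branch); the module names (FourierUnit, SpatialFrequencyBlock), layer layout, and channel choices are assumptions, not the authors' EHAT implementation.

```python
import torch
import torch.nn as nn


class FourierUnit(nn.Module):
    """Frequency-domain branch: rFFT -> 1x1 conv over stacked real/imag
    parts -> inverse rFFT, giving a global (image-wide) receptive field."""

    def __init__(self, channels):
        super().__init__()
        # Real and imaginary parts are concatenated along the channel axis.
        self.conv = nn.Conv2d(channels * 2, channels * 2, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        b, c, h, w = x.shape
        freq = torch.fft.rfft2(x, norm="ortho")          # complex, (B, C, H, W//2+1)
        freq = torch.cat([freq.real, freq.imag], dim=1)  # (B, 2C, H, W//2+1)
        freq = self.act(self.conv(freq))
        real, imag = freq.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


class SpatialFrequencyBlock(nn.Module):
    """Hypothetical SFB: a local 3x3 convolution branch and the Fourier
    branch above, fused by a 1x1 conv inside a residual connection."""

    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.frequency = FourierUnit(channels)
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=1)

    def forward(self, x):
        out = torch.cat([self.spatial(x), self.frequency(x)], dim=1)
        return x + self.fuse(out)  # residual connection


if __name__ == "__main__":
    block = SpatialFrequencyBlock(channels=64)
    feat = torch.randn(1, 64, 48, 48)   # e.g. a low-resolution feature map
    print(block(feat).shape)            # torch.Size([1, 64, 48, 48])
```

The point of pairing the two branches is that the spatial convolutions capture local texture while the Fourier branch mixes information across the whole feature map in one step, which is the long-range behavior the abstract attributes to the SFB.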
Pages: 225 / 237
Page count: 13