HMAFNet: Hybrid Mamba-Attention Fusion Network for Remote Sensing Image Semantic Segmentation

Cited by: 1
Authors
Sun, Haoyue [1 ]
Liu, Jianjun [1 ]
Yang, Jinlong [1 ]
Wu, Zebin [2 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Jiangsu Prov Engn Lab Pattern Recognit & Computat, Wuxi 214122, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Semantic segmentation; Decoding; Visualization; Vectors; Convolution; Transformers; Remote sensing; Data mining; Computational modeling; Cross-attention; global feature representation; Mamba; remote sensing (RS) image; semantic segmentation; CLASSIFIER;
DOI
10.1109/LGRS.2025.3554786
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry]
Discipline Code
0708; 070902
Abstract
Remote sensing (RS) images contain rich ground information, diverse object types, and large scale variations, all of which make precise segmentation difficult. Recently, the state-space model, as refined by Mamba, has offered global modeling capability at linear computational complexity. However, it still extracts insufficient global information along the spatial and channel dimensions, which is crucial for accurate segmentation, and it lacks sensitivity to local details. To address these issues, we propose a hybrid Mamba-attention fusion network (HMAFNet) for RS image semantic segmentation, built on an encoder-decoder architecture. Specifically, the encoder incorporates a spatial-channel Mamba (SCMamba) module, which uses Mamba to efficiently capture global feature representations across both the spatial and channel dimensions, while a parallel convolutional branch supplements the local information essential to the encoding phase. In the decoding phase, we propose an information-guided cross fusion (IGCF) module, which generates corresponding features via convolution-based and Mamba-based information-guided branches; a cross-attention mechanism enables the two sets of features to interact and fuse, preserving fine details and further reducing semantic differences. Extensive comparison and ablation experiments on the Vaihingen and Potsdam datasets show that the proposed HMAFNet achieves better segmentation results.
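The cross-attention fusion between the convolution-based and Mamba-based branches that the abstract describes can be sketched roughly as follows. This is a minimal single-head numpy illustration, not the authors' exact IGCF design: the token shapes, the residual fusion, and the function names (`cross_attention_fuse`, `softmax`) are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(f_conv, f_mamba):
    """Let the local (conv) branch query the global (Mamba) branch.

    f_conv, f_mamba: (N, C) token matrices, i.e. feature maps flattened
    to N spatial tokens of C channels each.
    """
    d_k = f_conv.shape[-1]
    # Queries come from the conv branch; keys/values from the Mamba branch.
    attn = softmax(f_conv @ f_mamba.T / np.sqrt(d_k))  # (N, N) attention map
    attended = attn @ f_mamba                          # conv tokens enriched with global context
    return f_conv + attended                           # simple residual fusion

rng = np.random.default_rng(0)
conv_feats = rng.standard_normal((16, 8))    # local-detail branch tokens
mamba_feats = rng.standard_normal((16, 8))   # global-context branch tokens
fused = cross_attention_fuse(conv_feats, mamba_feats)
print(fused.shape)  # (16, 8)
```

In the paper's bidirectional setting one would presumably also run the symmetric direction (Mamba tokens querying conv tokens) and merge the two outputs; the sketch shows only one direction for brevity.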
Pages: 5