A Deep Learning Method for Building Extraction from Remote Sensing Images by Fuzing Local and Global Features

Cited by: 0
Authors
Wang, Yitong [1 ]
Wang, Shumin [1 ]
Yuan, Jing [2 ]
Dou, Aixia [1 ]
Gu, Ziying [1 ]
Affiliations
[1] Inst Earthquake Forecasting, CEA, Beijing 100036, Peoples R China
[2] Inst Disaster Prevent, Sch Informat Engn, Langfang 065201, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
All Open Access; Gold;
DOI
10.1155/2024/5575787
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
As important disaster-bearing bodies, buildings are a focus of attention in seismic disaster risk assessment and emergency rescue. Extracting buildings with complex textures and variable scales and shapes quickly and accurately from high-resolution remote sensing images is therefore of great practical significance. We proposed MATUnet, an improved TransUnet model based on multiscale grouped convolution and attention, which retains more local detail features and enhances the representation of global features while reducing the number of network parameters. We designed a multiscale grouped convolutional feature extraction module with attention (GAM) to strengthen the representation of detailed features. A convolutional positional encoding module (PEG) was added and the number of transformer layers was redetermined, which mitigated the loss of local feature information and the difficulty of network convergence. A channel attention module (CAM) in the decoder enhanced the salient information of the features and reduced information redundancy after feature fusion. We evaluated MATUnet on the WHU building dataset and the Massachusetts dataset, where it achieved the best IoU results of 92.14% and 83.22%, respectively, outperforming other general-purpose and state-of-the-art networks under the same conditions. We also obtained good segmentation results on the GF2 Xichang building dataset.
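The abstract names the PEG and CAM modules without implementation detail. The Python sketch below is a minimal, illustrative rendering under common-design assumptions: the PEG is shown as a CPVT-style depthwise-convolution positional encoding and the CAM as a squeeze-and-excitation-style channel attention. Neither is taken from the paper itself, and all class, function, and parameter names here are hypothetical.

# Minimal PyTorch sketches of two modules named in the abstract.
# Assumed designs (CPVT-style PEG, SE-style CAM), not the authors' code.
import torch
import torch.nn as nn

class PEG(nn.Module):
    """Convolutional positional encoding: a depthwise 3x3 conv applied to the
    token grid and added back as a residual (an assumed CPVT-style design)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)

    def forward(self, tokens, h, w):
        # tokens: (B, N, C) with N == h * w
        b, n, c = tokens.shape
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        feat = feat + self.proj(feat)            # inject positional cues
        return feat.flatten(2).transpose(1, 2)   # back to (B, N, C)

class CAM(nn.Module):
    """Channel attention for the decoder: squeeze (global average pooling),
    excite (two 1x1 convs), then rescale the fused feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))         # reweight channels

# Quick shape check
x = torch.randn(2, 64, 32, 32)
print(CAM(64)(x).shape)                          # torch.Size([2, 64, 32, 32])
tokens = x.flatten(2).transpose(1, 2)            # (2, 1024, 64)
print(PEG(64)(tokens, 32, 32).shape)             # torch.Size([2, 1024, 64])

In this sketch the PEG operates on transformer tokens reshaped to their spatial grid, while the CAM reweights decoder channels after skip-connection fusion; both are drop-in residual-style blocks that add few parameters.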
Pages: 26