Swin Transformer Embedding UNet for Remote Sensing Image Semantic Segmentation

Cited by: 496
Authors
He, Xin [1 ,2 ]
Zhou, Yong [1 ,2 ]
Zhao, Jiaqi [1 ,2 ]
Zhang, Di [1 ,2 ]
Yao, Rui [1 ,2 ]
Xue, Yong [3 ,4 ]
Affiliations
[1] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Jiangsu, Peoples R China
[2] Minist Educ Peoples Republ China, Engn Res Ctr Mine Digitizat, Xuzhou 221116, Jiangsu, Peoples R China
[3] China Univ Min & Technol, Sch Environm Sci & Spatial Informat, Xuzhou 221116, Jiangsu, Peoples R China
[4] Univ Derby, Sch Elect Comp & Math, Derby DE22 1GB, England
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2022 / Vol. 60
Funding
National Natural Science Foundation of China;
Keywords
Transformers; Semantics; Image segmentation; Feature extraction; Convolutional neural networks; Remote sensing; Task analysis; Global information embedding; remote sensing (RS); semantic segmentation; Swin transformer; CLASSIFICATION; RECOGNITION;
DOI
10.1109/TGRS.2022.3144165
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry];
Discipline Codes
0708 ; 070902 ;
Abstract
Global context information is essential for the semantic segmentation of remote sensing (RS) images. However, most existing methods rely on convolutional neural networks (CNNs), which struggle to capture global context directly because of the locality of the convolution operation. Inspired by the Swin transformer's powerful global modeling capability, we propose a novel semantic segmentation framework for RS images, ST-UNet, which embeds the Swin transformer into the classical CNN-based U-shaped network (UNet). ST-UNet constitutes a novel dual-encoder structure with the Swin transformer and CNN in parallel. First, we propose a spatial interaction module (SIM), which encodes spatial information in the Swin transformer block by establishing pixel-level correlations to enhance the feature representation of occluded objects. Second, we construct a feature compression module (FCM) to reduce the loss of detailed information and preserve more small-scale features during patch-token downsampling in the Swin transformer, which improves the segmentation accuracy of small-scale ground objects. Finally, as a bridge between the dual encoders, a relational aggregation module (RAM) is designed to hierarchically integrate global dependencies from the Swin transformer into the CNN features. ST-UNet brings significant improvements on both the ISPRS Vaihingen and Potsdam datasets. The code will be available at https://github.com/XinnHe/ST-UNet.
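The dual-encoder idea summarized above — injecting global context from a transformer branch into CNN features stage by stage — can be sketched minimally in NumPy. The function below is an illustrative assumption in the spirit of the relational aggregation module (RAM), not the authors' implementation; the shapes, the global-pool-plus-sigmoid gating, and the residual connection are all simplifications:

```python
import numpy as np

def relational_aggregation(cnn_feat, swin_feat):
    """Hypothetical sketch of fusing global (Swin) context into local (CNN)
    features at one encoder stage. Both inputs are (C, H, W) feature maps
    from the two parallel encoder branches."""
    # Global-average-pool the transformer branch to one descriptor per channel.
    global_desc = swin_feat.mean(axis=(1, 2))          # shape (C,)
    # Squash the descriptor into channel weights in (0, 1) via a sigmoid.
    weights = 1.0 / (1.0 + np.exp(-global_desc))       # shape (C,)
    # Reweight the CNN features channel-wise, with a residual connection
    # so the local features are modulated rather than replaced.
    return cnn_feat * weights[:, None, None] + cnn_feat

rng = np.random.default_rng(0)
cnn = rng.standard_normal((64, 32, 32))    # CNN-branch features
swin = rng.standard_normal((64, 32, 32))   # Swin-branch features
out = relational_aggregation(cnn, swin)
print(out.shape)  # (64, 32, 32)
```

Because the gate lies strictly in (0, 1) and is applied with a residual, each channel of the fused map is scaled by a factor between 1 and 2, so local detail is never suppressed below its original magnitude.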
Pages: 15