Efficient Depth Fusion Transformer for Aerial Image Semantic Segmentation

Cited by: 22
Authors
Yan, Li [1 ,2 ]
Huang, Jianming [1 ]
Xie, Hong [1 ]
Wei, Pengcheng [1 ]
Gao, Zhao [2 ]
Affiliations
[1] Wuhan Univ, Sch Geodesy & Geomat, Wuhan 430079, Peoples R China
[2] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Peoples R China
Keywords
semantic segmentation; self-attention; depth fusion; transformer; RESOLUTION; RGB;
DOI
10.3390/rs14051294
Chinese Library Classification (CLC)
X [Environmental Science; Safety Science];
Discipline Classification Code
08 ; 0830 ;
Abstract
Taking depth into consideration has been proven to improve the performance of semantic segmentation by providing additional geometric information. Most existing works adopt a two-stream network that extracts features from color images and depth images separately using two branches of identical structure, which suffers from high memory and computation costs. We find that depth features acquired by simple downsampling can also play a complementary role in the semantic segmentation task, sometimes performing even better than a two-stream scheme with two identical branches. In this paper, a novel and efficient depth fusion transformer network for aerial image segmentation is proposed. The presented network utilizes patch merging to downsample the depth input, and a depth-aware self-attention (DSA) module is designed to mitigate the gap caused by the differences between the two branches and the two modalities. Concretely, DSA fuses depth and color features by computing a depth similarity term and applying its impact to the self-attention map calculated from the color features. Extensive experiments on the ISPRS 2D semantic segmentation datasets validate the efficiency and effectiveness of our method. With nearly half the parameters of the traditional two-stream scheme, our method achieves 83.82% mIoU on the Vaihingen dataset, outperforming other state-of-the-art methods, and 87.43% mIoU on the Potsdam dataset, comparable to the state of the art.
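The abstract states only that DSA modulates a color-derived self-attention map with a depth similarity term; the exact fusion formula is not given there. The following is a minimal NumPy sketch of one plausible reading, in which a pairwise depth-similarity matrix is added to the scaled dot-product logits before the softmax. All names (`depth_aware_self_attention`, the negative-L1 depth similarity) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def depth_aware_self_attention(color_tokens, depth_tokens, Wq, Wk, Wv):
    """Sketch of a depth-aware self-attention step (assumed form).

    color_tokens: (N, C) color-branch token features
    depth_tokens: (N, Cd) downsampled depth-branch token features
    Wq, Wk, Wv:   (C, C) projection matrices for the color branch
    """
    q = color_tokens @ Wq
    k = color_tokens @ Wk
    v = color_tokens @ Wv
    d = q.shape[-1]
    # Standard attention logits computed from color features only.
    color_logits = (q @ k.T) / np.sqrt(d)            # (N, N)
    # Assumed depth similarity: negative pairwise L1 distance,
    # so tokens at similar depths attend to each other more.
    depth_sim = -np.abs(
        depth_tokens[:, None, :] - depth_tokens[None, :, :]
    ).sum(axis=-1)                                   # (N, N)
    # Depth similarity modulates the color attention map.
    attn = softmax(color_logits + depth_sim, axis=-1)
    return attn @ v                                  # (N, C)
```

Adding the depth term to the logits (rather than running a second full attention branch over depth features) is what keeps this fusion cheap: the depth branch only needs downsampled tokens, not its own query/key/value projections, which is consistent with the roughly halved parameter count reported in the abstract.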
Pages: 18
Related Papers
43 records in total
[1]  
[Anonymous], 2016, ICLR
[2]   Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks [J].
Audebert, Nicolas ;
Le Saux, Bertrand ;
Lefevre, Sebastien .
ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2018, 140 :20-32
[3]   Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks [J].
Audebert, Nicolas ;
Le Saux, Bertrand ;
Lefevre, Sebastien .
COMPUTER VISION - ACCV 2016, PT I, 2017, 10111 :180-196
[4]   SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation [J].
Badrinarayanan, Vijay ;
Kendall, Alex ;
Cipolla, Roberto .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (12) :2481-2495
[5]  
Chen KQ, 2019, INT GEOSCI REMOTE SE, P3911, DOI 10.1109/IGARSS.2019.8899217
[6]   Edge-Aware Convolution for RGB-D Image Segmentation [J].
Chen, Rongsen ;
Zhang, Fang-Lue ;
Rhee, Taehyun .
2020 35TH INTERNATIONAL CONFERENCE ON IMAGE AND VISION COMPUTING NEW ZEALAND (IVCNZ), 2020,
[7]   Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation [J].
Chen, Xiaokang ;
Lin, Kwan-Yee ;
Wang, Jingbo ;
Wu, Wayne ;
Qian, Chen ;
Li, Hongsheng ;
Zeng, Gang .
COMPUTER VISION - ECCV 2020, PT XI, 2020, 12356 :561-577
[8]   Locality-Sensitive Deconvolution Networks with Gated Fusion for RGB-D Indoor Semantic Segmentation [J].
Cheng, Yanhua ;
Cai, Rui ;
Li, Zhiwei ;
Zhao, Xin ;
Huang, Kaiqi .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :1475-1483
[9]  
Dosovitskiy A., 2021, Proceedings of the 9th International Conference on Learning Representations (ICLR)
[10]   A survey on indoor RGB-D semantic segmentation: from hand-crafted features to deep convolutional neural networks [J].
Fooladgar, Fahimeh ;
Kasaei, Shohreh .
MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (7-8) :4499-4524