Dual Attention Feature Fusion and Adaptive Context for Accurate Segmentation of Very High-Resolution Remote Sensing Images

Times Cited: 13
Authors
Shi, Hao [1 ,2 ,3 ]
Fan, Jiahe [1 ,2 ,3 ]
Wang, Yupei [1 ,2 ,3 ]
Chen, Liang [1 ,2 ,3 ]
Affiliations
[1] Beijing Institute of Technology, School of Information and Electronics, Radar Research Laboratory, Beijing 100081, China
[2] Beijing Key Laboratory of Embedded Real-Time Information Processing, Beijing 100081, China
[3] Beijing Institute of Technology, Chongqing Innovation Center, Chongqing 401120, China
Keywords
deep learning; land cover classification; semantic segmentation; classification; network
DOI
10.3390/rs13183715
CLC Number (Chinese Library Classification)
X [Environmental Science, Safety Science]
Discipline Classification Code
08; 0830
Abstract
Land cover classification of high-resolution remote sensing images aims to obtain pixel-level land cover understanding, which is often modeled as semantic segmentation of remote sensing images. In recent years, convolutional neural network (CNN)-based land cover classification methods have achieved great advances. However, previous methods often fail to produce fine segmentation results, especially for pixels near object boundaries. To obtain boundary-preserving predictions, we first propose to incorporate spatially adaptive contextual cues. In this way, objects with similar appearance can be effectively distinguished using the extracted global contextual cues, which are particularly helpful for identifying pixels near object boundaries. On this basis, low-level spatial details and high-level semantic cues are effectively fused with the help of our proposed dual attention mechanism. Concretely, when fusing multi-level features, we employ a dual attention feature fusion module built on both spatial and channel attention mechanisms to mitigate the influence of the large semantic gap between feature levels, further improving the segmentation accuracy of pixels near object boundaries. Extensive experiments were carried out on the ISPRS 2D Semantic Labeling Vaihingen dataset and GaoFen-2 data to demonstrate the effectiveness of the proposed method, which achieves better performance than other state-of-the-art methods.
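As a concrete illustration of the fusion step described in the abstract, below is a minimal PyTorch sketch of a dual attention feature fusion module that combines a high-resolution, low-level feature map with an upsampled high-level semantic map, gating the result with channel and spatial attention in turn. The module name DualAttentionFusion, the channel widths, the squeeze-and-excitation-style channel gate, and the pooled-statistics spatial gate are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionFusion(nn.Module):
    # Hypothetical sketch: fuses a high-resolution, low-level feature map
    # with an upsampled low-resolution, high-level semantic feature map,
    # applying channel attention and spatial attention in sequence.
    def __init__(self, low_ch, high_ch, out_ch, reduction=16):
        super().__init__()
        self.low_proj = nn.Conv2d(low_ch, out_ch, 1)    # align channel widths
        self.high_proj = nn.Conv2d(high_ch, out_ch, 1)
        # Channel attention: squeeze-and-excitation-style global gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: gate computed from pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, low, high):
        # Upsample coarse semantic features to the fine spatial resolution.
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        fused = self.low_proj(low) + self.high_proj(high)
        fused = fused * self.channel_gate(fused)        # re-weight channels
        avg_map = fused.mean(dim=1, keepdim=True)       # per-pixel mean
        max_map, _ = fused.max(dim=1, keepdim=True)     # per-pixel max
        attn = self.spatial_gate(torch.cat([avg_map, max_map], dim=1))
        return fused * attn                             # re-weight locations

A quick shape check under these assumptions: DualAttentionFusion(low_ch=256, high_ch=512, out_ch=256) applied to a (1, 256, 128, 128) low-level map and a (1, 512, 32, 32) semantic map returns a fused (1, 256, 128, 128) tensor, i.e. detail-resolution features re-weighted along both the channel and spatial axes before decoding.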
Pages: 18