A deep translation (GAN) based change detection network for optical and SAR remote sensing images

Cited by: 267
Authors
Li, Xinghua [1 ]
Du, Zhengshun [1 ]
Huang, Yanyuan [1 ]
Tan, Zhenyu [2 ]
Affiliations
[1] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430079, Peoples R China
[2] Northwest Univ, Coll Urban & Environm Sci, Xian 710127, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Change detection; Deep translation; Depthwise separable convolution; GAN; Multi-scale loss; Optical and SAR images; CLASSIFICATION;
DOI
10.1016/j.isprsjprs.2021.07.007
Chinese Library Classification (CLC)
P9 [Physical Geography];
Discipline code
0705 ; 070501 ;
Abstract
With the development of space-based imaging technology, an increasing number of images with different modalities and resolutions are available. Optical images reflect the abundant spectral information and geometric shape of ground objects, but their quality degrades easily under poor atmospheric conditions. Although synthetic aperture radar (SAR) images cannot provide the spectral features of the region of interest (ROI), they can capture all-weather and all-time polarization information. By nature, optical and SAR images encapsulate a wealth of complementary information, which is of great significance for change detection (CD) in poor weather situations. However, due to the difference in the imaging mechanisms of optical and SAR images, it is difficult to conduct their CD directly using traditional difference or ratio algorithms. Most recent CD methods employ image translation to reduce their difference, but the results are obtained by ordinary algebraic methods and threshold segmentation with limited accuracy. Towards this end, this work proposes a deep translation based change detection network (DTCDN) for optical and SAR images. The deep translation first maps images from one domain (e.g., optical) to another domain (e.g., SAR) through a cyclic structure into the same feature space. With similar characteristics after deep translation, they become comparable. Unlike most previous studies, the translation results are fed into a supervised CD network that utilizes deep context features to separate unchanged pixels from changed pixels. In the experiments, the proposed DTCDN was tested on four representative data sets from Gloucester, California, and Shuguang village. Compared with state-of-the-art methods, the effectiveness and robustness of the proposed method were confirmed.
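The abstract describes a two-stage pipeline: a CycleGAN-style generator first translates the optical image into the SAR domain, and the translated/real SAR pair is then compared by a supervised CD network. The sketch below is a minimal toy illustration of that data flow only, not the authors' code: the generator is a stand-in identity function, and a per-pixel difference with a threshold stands in for the supervised CD network.

```python
import numpy as np

def translate_optical_to_sar(optical, generator):
    """Stage 1: deep translation. `generator` stands in for the trained
    CycleGAN-style network that maps optical patches into the SAR
    feature space (here an identity map, purely for illustration)."""
    return generator(optical)

def change_map(translated, sar, threshold=0.5):
    """Stage 2: compare the translated image with the real SAR image.
    The paper feeds both into a supervised CD network; a simple
    per-pixel absolute difference plus threshold stands in here."""
    diff = np.abs(translated - sar)
    return (diff > threshold).astype(np.uint8)

# Toy data: a 4x4 "optical" patch and a "SAR" patch that differ
# in one pixel (the simulated change).
optical = np.zeros((4, 4))
sar = np.zeros((4, 4))
sar[0, 0] = 1.0  # the changed pixel

identity_generator = lambda x: x  # hypothetical stand-in for the GAN
translated = translate_optical_to_sar(optical, identity_generator)
cd = change_map(translated, sar)
print(int(cd.sum()))  # number of detected changed pixels -> 1
```

In the actual DTCDN, both stages are deep networks trained end to end on labeled change maps; the point of the sketch is only that translation makes the two modalities directly comparable before detection.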
Pages: 14 - 34 (21 pages)