Sentinel SAR-optical fusion for crop type mapping using deep learning and Google Earth Engine

Cited by: 136
Authors
Adrian, Jarrett [1 ,2 ]
Sagan, Vasit [1 ,2 ]
Maimaitijiang, Maitiniyazi [1 ,2 ]
Affiliations
[1] St Louis Univ, Geospatial Inst, 3694 West Pine Mall, St Louis, MO 63108 USA
[2] St Louis Univ, Dept Earth & Atmospher Sci, 3642 Lindell Blvd, St Louis, MO 63108 USA
Funding
U.S. National Science Foundation; NASA (National Aeronautics and Space Administration);
Keywords
3D U-Net; Denoising neural networks; Sentinel-1; Sentinel-2; Data fusion; INSTANCE SEGMENTATION; LAND-COVER; CLASSIFICATION; RAPESEED; NETWORK;
D O I
10.1016/j.isprsjprs.2021.02.018
Chinese Library Classification
P9 [Physical Geography];
Discipline Code
0705; 070501;
Abstract
Accurate crop type mapping provides numerous benefits for a deeper understanding of food systems and yield prediction. Ever-increasing big data, easy access to high-resolution imagery, and cloud-based analytics platforms like Google Earth Engine have drastically improved the ability of scientists to advance data-driven agriculture with improved algorithms for crop type mapping using remote sensing, computer vision, and machine learning. Crop type mapping techniques have mainly relied on standalone SAR or optical imagery; few studies have investigated the potential of SAR-optical data fusion coupled with virtual constellations and 3-dimensional (3D) deep learning networks. To this end, we use a deep learning approach that utilizes denoised backscatter and texture information from multi-temporal Sentinel-1 SAR data and spectral information from multi-temporal Sentinel-2 optical data to map ten different crop types, as well as water, soil, and urban areas. Multi-temporal Sentinel-1 data was fused with multi-temporal Sentinel-2 optical data in an effort to improve classification accuracy for crop types. We compared the results of the 3D U-Net to state-of-the-art deep learning networks, including SegNet and 2D U-Net, as well as a commonly used machine learning method, Random Forest.
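The SAR-optical fusion described above can be sketched as channel-wise (early) fusion: the Sentinel-1 polarization channels and Sentinel-2 spectral bands for each acquisition date are stacked along the band axis, giving a 4D (time, height, width, channels) input that a 3D network such as the 3D U-Net can convolve jointly over time and space. This is a minimal illustration, not the authors' code; the tile size, number of dates, and band counts here are assumptions.

```python
import numpy as np

# Hypothetical dimensions: 6 acquisition dates, 64x64-pixel tiles.
T, H, W = 6, 64, 64

# Sentinel-1 SAR: 2 polarization channels (VV, VH) per date.
sar = np.random.rand(T, H, W, 2).astype(np.float32)

# Sentinel-2 optical: 4 bands per date (e.g. B2, B3, B4, B8 -- illustrative).
optical = np.random.rand(T, H, W, 4).astype(np.float32)

# Channel-wise fusion: concatenate along the band axis so a 3D CNN
# sees SAR and optical information in every spatio-temporal window.
fused = np.concatenate([sar, optical], axis=-1)
print(fused.shape)  # → (6, 64, 64, 6)
```

A 2D network, by contrast, would receive each date (or a flattened time-as-channels stack) separately, losing the explicit temporal axis that the 3D convolutions exploit.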
The results showed that (1) fusing multi-temporal SAR and optical data yields higher training overall accuracies (OA) (3D U-Net 0.992, 2D U-Net 0.943, SegNet 0.871) and testing OA (3D U-Net 0.941, 2D U-Net 0.847, SegNet 0.643) for crop type mapping than standalone multi-temporal SAR or optical data; (2) optical data fused with SAR data denoised via a denoising convolutional neural network (OA 0.912) performed better for crop type mapping than optical data fused with boxcar- (OA 0.880), Lee- (OA 0.881), and median-filtered (OA 0.887) SAR data; and (3) 3D convolutional neural networks perform better than 2D convolutional neural networks for crop type mapping (SAR OA 0.912, optical OA 0.937, fused OA 0.992).
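Among the baseline speckle filters compared in finding (2), the boxcar filter is simply a sliding-window mean over the backscatter image. The sketch below is an illustrative implementation under assumed parameters (window size, edge padding), not the paper's processing chain; the Lee and median filters it was compared against adapt the window statistics rather than averaging uniformly.

```python
import numpy as np

def boxcar_filter(img, size=5):
    """Boxcar (moving-average) speckle filter: replace each pixel with
    the mean of a size x size window, using edge padding at the border."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    # Accumulate all shifted views of the padded image, then average.
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

# Toy backscatter tile (random values stand in for SAR intensities).
backscatter = np.random.rand(32, 32)
smoothed = boxcar_filter(backscatter, size=5)
```

Uniform averaging suppresses speckle but also blurs field boundaries, which is one reason a learned denoising CNN can outperform it for downstream crop classification.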
Pages: 215-235 (21 pages)
Related Papers
89 entries in total
  • [1] [Anonymous], 2010, CAMBRIDGE DICT STAT
  • [2] SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
    Badrinarayanan, Vijay
    Kendall, Alex
    Cipolla, Roberto
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (12) : 2481 - 2495
  • [3] Spatiotemporal Image Fusion in Remote Sensing
    Belgiu, Mariana
    Stein, Alfred
    [J]. REMOTE SENSING, 2019, 11 (07)
  • [4] Estimation of soil moisture patterns in mountain grasslands by means of SAR RADARSAT2 images and hydrological modeling
    Bertoldi, Giacomo
    Della Chiesa, Stefano
    Notarnicola, Claudia
    Pasolli, Luca
    Niedrist, Georg
    Tappeiner, Ulrike
    [J]. JOURNAL OF HYDROLOGY, 2014, 516 : 245 - 257
  • [5] Efficiency of crop identification based on optical and SAR image time series
    Blaes, X
    Vanhalle, L
    Defourny, P
    [J]. REMOTE SENSING OF ENVIRONMENT, 2005, 96 (3-4) : 352 - 365
  • [6] A SAR-Based Index for Landscape Changes in African Savannas
    Braun, Andreas
    Hochschild, Volker
    [J]. REMOTE SENSING, 2017, 9 (04)
  • [7] Random forests
    Breiman, L
    [J]. MACHINE LEARNING, 2001, 45 (01) : 5 - 32
  • [8] The potential to reduce the risk of diffuse pollution from agriculture while improving economic performance at farm level
    Buckley, Cathal
    Carney, Patricia
    [J]. ENVIRONMENTAL SCIENCE & POLICY, 2013, 25 : 118 - 126
  • [9] Hybrid Task Cascade for Instance Segmentation
    Chen, Kai
    Pang, Jiangmiao
    Wang, Jiaqi
    Xiong, Yu
    Li, Xiaoxiao
    Sun, Shuyang
    Feng, Wansen
    Liu, Ziwei
    Shi, Jianping
    Ouyang, Wanli
    Loy, Chen Change
    Lin, Dahua
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 4969 - 4978
  • [10] Çiçek, Özgün, 2016, INT C MED IM COMP CO, P424, DOI 10.1007/978-3-319-46723-8_49