A Learning Strategy for Amazon Deforestation Estimations Using Multi-Modal Satellite Imagery

Cited: 0
Authors
Lee, Dongoo [1 ]
Choi, Yeonju [1 ]
Affiliations
[1] Korea Aerosp Res Inst, Daejeon 34133, South Korea
Keywords
deforestation; remote sensing; multi-modal dataset; many-to-one mask; multi-view learning; LANDSAT TIME-SERIES; DISTURBANCES; DEGRADATION; FORESTS;
DOI
10.3390/rs15215167
Chinese Library Classification (CLC)
X [Environmental science; safety science];
Discipline classification code
08 ; 0830 ;
Abstract
Estimations of deforestation are crucial, as increased levels of deforestation induce serious environmental problems. However, it is challenging to perform investigations over extensive areas, such as the Amazon rainforest, due to the vast size of the region and the difficulty of direct human access. Satellite imagery is an effective solution to this problem; combining optical images with synthetic aperture radar (SAR) images enables deforestation monitoring over large areas irrespective of weather conditions. In this study, we propose a learning strategy for multi-modal deforestation estimation on this basis. Images from three satellites, Sentinel-1, Sentinel-2, and Landsat 8, were utilized to this end. The proposed algorithm overcomes the visibility limitations caused by the long Amazon rainy season by building a multi-modal dataset with supplementary SAR images, achieving high estimation accuracy. The dataset pairs satellite data acquired on a daily basis with comparatively sparse ground-truth masks generated monthly, a configuration referred to as the many-to-one-mask condition. The Normalized Difference Vegetation Index and Normalized Difference Soil Index bands are selected to compose the dataset, which yields better detection performance and shorter training times than datasets consisting of RGB or all bands. Multiple deep neural networks are trained independently for each modality, and a fusion method is developed to detect deforestation. The proposed method uses the distance similarity of the predicted deforestation rates to filter the prediction results; predictions with high similarity are merged into the final result through averaging and denoising operations. The performances of five network variants of the U-Net family are compared, with Attention U-Net exhibiting the best prediction results. Finally, the proposed method is applied to estimate the deforestation status of novel queries with high accuracy.
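The fusion step described in the abstract (filtering per-modality predictions by the distance similarity of their deforestation rates, then merging the surviving predictions with averaging and denoising) can be illustrated with a minimal sketch. The function names, the median-based consensus rate, the tolerance value, and the median-filter denoising below are illustrative assumptions, not the paper's exact procedure:

import numpy as np
from scipy.ndimage import median_filter

def deforestation_rate(mask):
    # Fraction of pixels predicted as deforested in a binary mask.
    return float(mask.mean())

def fuse_predictions(pred_maps, rate_tolerance=0.05, threshold=0.5, denoise_size=3):
    # pred_maps      : list of 2-D per-pixel deforestation probability maps,
    #                  one per modality/network (e.g., Sentinel-1, Sentinel-2, Landsat 8).
    # rate_tolerance : maximum allowed distance from the consensus deforestation rate (assumed value).
    # threshold      : probability cut-off for binarising each map.
    # denoise_size   : window size of the median filter used as the denoising step (assumed choice).
    binary_masks = [(p >= threshold).astype(np.float32) for p in pred_maps]
    rates = np.array([deforestation_rate(m) for m in binary_masks])

    # Keep only predictions whose deforestation rate lies close to the consensus
    # (median) rate; dissimilar modalities are filtered out.
    consensus = np.median(rates)
    keep = np.abs(rates - consensus) <= rate_tolerance
    selected = [m for m, k in zip(binary_masks, keep) if k]
    if not selected:  # fall back to all predictions if none survive the filter
        selected = binary_masks

    # Merge the surviving predictions by averaging, then denoise the result.
    fused = np.mean(selected, axis=0)
    fused = median_filter(fused, size=denoise_size)
    return (fused >= 0.5).astype(np.uint8)

Under this sketch, the per-modality networks disagree only through their probability maps; the rate-similarity filter discards a modality whose overall deforestation estimate deviates from the consensus, which is one plausible reading of the "distance similarity" filtering described above.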
Pages: 20
Related Papers (50 total)
  • [21] Multi-modal contrastive learning of subcellular organization using DICE
    Nasser, Rami
    Schaffer, Leah V.
    Ideker, Trey
    Sharan, Roded
    BIOINFORMATICS, 2024, 40 : ii105 - ii110
  • [22] Modelling multi-modal learning in a hawkmoth
    Balkenius, Anna
    Kelber, Almut
    Balkenius, Christian
    FROM ANIMALS TO ANIMATS 9, PROCEEDINGS, 2006, 4095 : 422 - 433
  • [23] Multi-modal Network Representation Learning
    Zhang, Chuxu
    Jiang, Meng
    Zhang, Xiangliang
    Ye, Yanfang
    Chawla, Nitesh V.
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 3557 - 3558
  • [24] Multi-modal Hate Speech Detection using Machine Learning
    Boishakhi, Fariha Tahosin
    Shill, Ponkoj Chandra
    Alam, Md Golam Rabiul
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 4496 - 4499
  • [25] MaPLe: Multi-modal Prompt Learning
    Khattak, Muhammad Uzair
    Rasheed, Hanoona
    Maaz, Muhammad
    Khan, Salman
    Khan, Fahad Shahbaz
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 19113 - 19122
  • [26] Multi-Modal Convolutional Dictionary Learning
    Gao, Fangyuan
    Deng, Xin
    Xu, Mai
    Xu, Jingyi
    Dragotti, Pier Luigi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 1325 - 1339
  • [27] Multi-modal knowledge base generation from very high resolution satellite imagery for habitat mapping
    Manakos, Ioannis
    Technitou, Eleanna
    Petrou, Zisis
    Karydas, Christos
    Tomaselli, Valeria
    Veronico, Giuseppe
    Mountrakis, Giorgos
    EUROPEAN JOURNAL OF REMOTE SENSING, 2016, 49 : 1033 - 1060
  • [28] QMLS: quaternion mutual learning strategy for multi-modal brain tumor segmentation
    Deng, Zhengnan
    Huang, Guoheng
    Yuan, Xiaochen
    Zhong, Guo
    Lin, Tongxu
    Pun, Chi-Man
    Huang, Zhixin
    Liang, Zhixin
    PHYSICS IN MEDICINE AND BIOLOGY, 2024, 69 (01):
  • [29] Variational interpolation of multi-modal ocean satellite images
    Ba, Sileye O.
    Corpetti, Thomas
    Chapron, Bertrand
    Fablet, Ronan
    TRAITEMENT DU SIGNAL, 2012, 29 (3-5) : 433 - 454
  • [30] Real-time estimations of multi-modal frequencies for smart structures
    Rew, KH
    Kim, S
    Lee, I
    Park, Y
    SMART MATERIALS AND STRUCTURES, 2002, 11 (01) : 36 - 47