MUSTFN: A spatiotemporal fusion method for multi-scale and multi-sensor remote sensing images based on a convolutional neural network

Cited by: 14
Authors
Qin, Peng [1 ,2 ]
Huang, Huabing [1 ,2 ,3 ,4 ]
Tang, Hailong [1 ,2 ]
Wang, Jie [3 ]
Liu, Chong [1 ,2 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Geospatial Engn & Sci, Zhuhai 519082, Peoples R China
[2] Southern Marine Sci & Engn Guangdong Lab Zhuhai, Zhuhai 519082, Peoples R China
[3] Peng Cheng Lab, Shenzhen 518066, Peoples R China
[4] Int Res Ctr Big Data Sustainable Dev Goals, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Spatiotemporal fusion; CNN; Multi-sensor satellite data; Large-area image fusion; Multi-scale fusion scenarios; SURFACE REFLECTANCE; CLOUD REMOVAL; SATELLITE IMAGES; LANDSAT; MODIS; SERIES; INDEX;
DOI
10.1016/j.jag.2022.103113
Chinese Library Classification (CLC)
TP7 [Remote Sensing Technology];
Discipline Classification Codes
081102; 0816; 081602; 083002; 1404;
Abstract
Spatiotemporal data fusion is a widely used and well-proven technique for enhancing the application potential of multi-source remote sensing images. However, most existing methods have trouble generating high-quality fusion results when the areas covered by the images undergo rapid land cover change or when the images have substantial registration errors. While deep learning algorithms have demonstrated their capabilities for image fusion, it is challenging to apply deep-learning-based fusion methods in regions that experience persistent cloud cover and therefore have few cloud-free observations. To address these challenges, we developed a Multi-scene Spatiotemporal Fusion Network (MUSTFN) algorithm based on a Convolutional Neural Network (CNN). Our approach uses multi-level features to fuse images at different resolutions acquired by multiple sensors. Furthermore, MUSTFN uses multi-scale features to overcome the effects of geometric registration errors between images. Additionally, a multi-constrained loss function is proposed to improve the accuracy of image fusion over large areas and to solve fusion and gap-filling problems simultaneously by utilizing cloud-contaminated images through fine-tuning. Compared with several commonly used methods, our proposed MUSTFN performs better in fusing 30-m Landsat-7 images and 500-m MODIS images over a small area that has undergone large changes: the average relative Mean Absolute Error (rMAE) over the first four bands is 6.8% for MUSTFN, compared to 14.1% for the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), 12.8% for the Flexible Spatiotemporal Data Fusion (FSDAF) method, 8.4% for the Extended Super-Resolution Convolutional Neural Network (ESRCNN), and 8.1% for Spatiotemporal Fusion Using a Generative Adversarial Network (STFGAN). In particular, for images at different resolutions with different registration accuracies (e.g., 16-m Chinese GaoFen-1 and 500-m MODIS), MUSTFN achieved good-quality fusion results, with an average rMAE of 9.3% in spectral reflectance for the first four bands. Finally, we demonstrated the applicability of MUSTFN (average rMAE of 9.18%) when fusing long-term Landsat-8 composite images and MODIS images over a large region (830 km × 600 km). Overall, our results demonstrate the effectiveness of MUSTFN in addressing the main challenges of image fusion, including rapid land cover change between image acquisition dates, geometric misregistration between images, and the limited availability of cloud-free images. The MUSTFN code is freely available at: https://github.com/qpyeah/MUSTFN.
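To make the accuracy figures above concrete, the sketch below shows one common way to compute the per-band relative Mean Absolute Error (rMAE) of a fused image against a reference image. It assumes rMAE is the mean absolute error normalized by the mean reference reflectance of each band; the function name and the exact normalization are illustrative assumptions and may differ from the authors' implementation, which is available at the GitHub repository linked above.

# Illustrative sketch only: per-band relative Mean Absolute Error (rMAE),
# the metric quoted in the abstract. Assumes rMAE = MAE / mean(reference)
# for each band; the paper's exact definition may differ.
import numpy as np

def rmae_per_band(predicted, reference):
    """Return rMAE (as a fraction) for each band.

    predicted, reference: arrays of shape (bands, height, width)
    holding surface reflectance on the same grid.
    """
    mae = np.mean(np.abs(predicted - reference), axis=(1, 2))
    return mae / np.mean(reference, axis=(1, 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.uniform(0.0, 0.4, size=(4, 256, 256))                # e.g. first four bands
    predicted = reference + rng.normal(0.0, 0.02, size=reference.shape)  # a noisy fused estimate
    print(["%.1f%%" % (100 * v) for v in rmae_per_band(predicted, reference)])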
Pages: 16
Related Papers
50 records in total
  • [1] Multi-Scale Residual Convolutional Neural Network for Haze Removal of Remote Sensing Images
    Jiang, Hou
    Lu, Ning
    REMOTE SENSING, 2018, 10 (06)
  • [2] MSFusion: Multistage for Remote Sensing Image Spatiotemporal Fusion Based on Texture Transformer and Convolutional Neural Network
    Yang, Guangqi
    Qian, Yurong
    Liu, Hui
    Tang, Bochuan
    Qi, Ranran
    Lu, Yi
    Geng, Jun
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2022, 15 : 4653 - 4666
  • [3] Enblending Mosaicked Remote Sensing Images With Spatiotemporal Fusion of Convolutional Neural Networks
    Wei, Jingbo
    Tang, Wenchao
    He, Chaoqi
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2021, 14 : 5891 - 5902
  • [4] Multi-scale Adaptive Feature Fusion Network for Semantic Segmentation in Remote Sensing Images
    Shang, Ronghua
    Zhang, Jiyu
    Jiao, Licheng
    Li, Yangyang
    Marturi, Naresh
    Stolkin, Rustam
    REMOTE SENSING, 2020, 12 (05)
  • [5] Spatiotemporal Fusion of Remote Sensing Images using a Convolutional Neural Network with Attention and Multiscale Mechanisms
    Li, Weisheng
    Zhang, Xiayan
    Peng, Yidong
    Dong, Meilin
    INTERNATIONAL JOURNAL OF REMOTE SENSING, 2021, 42 (06) : 1973 - 1993
  • [6] MSNet: A Multi-Stream Fusion Network for Remote Sensing Spatiotemporal Fusion Based on Transformer and Convolution
    Li, Weisheng
    Cao, Dongwen
    Peng, Yidong
    Yang, Chao
    REMOTE SENSING, 2021, 13 (18)
  • [7] A Network Intrusion Detection Method Based on Deep Multi-scale Convolutional Neural Network
    Wang, Xiaowei
    Yin, Shoulin
    Li, Hang
    Wang, Jiachi
    Teng, Lin
    INTERNATIONAL JOURNAL OF WIRELESS INFORMATION NETWORKS, 2020, 27 (04) : 503 - 517
  • [8] An Image Compression Framework Based on Multi-scale Convolutional Neural Network for Deformation Images
    Liu, Zhenbing
    Li, Xinlong
    Li, Weiwei
    Lan, Rushi
    Luo, Xiaonan
    2019 TENTH INTERNATIONAL CONFERENCE ON INTELLIGENT CONTROL AND INFORMATION PROCESSING (ICICIP), 2019, : 174 - 179
  • [9] Multi-scale object detection in remote sensing imagery with convolutional neural networks
    Deng, Zhipeng
    Sun, Hao
    Zhou, Shilin
    Zhao, Juanping
    Lei, Lin
    Zou, Huanxin
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2018, 145 : 3 - 22