Convolution neural network with edge structure loss for spatiotemporal remote sensing image fusion

Cited by: 9
Authors
Lei, Dajiang [1]
Bai, Menghao [1]
Zhang, Liping [1]
Li, Weisheng [1]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
spatiotemporal fusion; convolutional neural network; pixel-level loss; spatial details; edge loss; LANDSAT; MODIS;
DOI
10.1080/01431161.2022.2030070
Chinese Library Classification (CLC) Number
TP7 [Remote Sensing Technology];
Subject Classification Codes
081102; 0816; 081602; 083002; 1404;
Abstract
Spatiotemporal fusion provides a feasible, economical way to generate remote sensing images with high spatial and temporal resolution. Recently proposed learning-based methods achieve high accuracy, but their network structures are relatively simple and cannot extract deep features from the input images, so the fused images fail to recover fine landform details and their overall quality suffers. Moreover, most methods rely on a single pixel-level (mean squared error, MSE) loss, which makes high-frequency details difficult to recover and reduces fusion accuracy. In this paper, we propose an edge structure loss that is added to a spatiotemporal fusion network trained without a pre-trained model. To fully extract the spectral information and spatial details of the input images, we adapt a DenseNet-BC module to the image fusion task, which allows features to propagate more easily through the whole network and gives the fusion better generalizability and robustness. The proposed edge loss further improves the accuracy of the fusion results. Experiments against existing spatiotemporal fusion algorithms in different regions show that the proposed method is more fault tolerant, achieves higher accuracy on quantitative quality indicators, and produces better visual results.
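The record does not give the exact formulation of the edge structure loss. A common way to realize such a term is to compare image gradients computed with a fixed Sobel operator and add them to the pixel-level MSE loss; the PyTorch sketch below illustrates that general idea under this assumption (the weighting factor lambda_edge and the helper names are hypothetical, not the authors' published implementation).

```python
# Minimal sketch: pixel-level MSE plus a Sobel-gradient edge term.
# The exact edge loss and the weight lambda_edge are assumptions,
# not the formulation from the paper.
import torch
import torch.nn.functional as F


def sobel_gradients(img: torch.Tensor) -> torch.Tensor:
    """Per-band gradient magnitude of a (batch, bands, H, W) tensor
    computed with fixed Sobel kernels (depthwise convolution)."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device, dtype=img.dtype)
    ky = kx.t()
    bands = img.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(bands, 1, 1, 1)
    ky = ky.view(1, 1, 3, 3).repeat(bands, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=bands)
    gy = F.conv2d(img, ky, padding=1, groups=bands)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def fusion_loss(pred: torch.Tensor, target: torch.Tensor,
                lambda_edge: float = 0.1) -> torch.Tensor:
    """Pixel-level MSE plus an edge (gradient) consistency term."""
    pixel_loss = F.mse_loss(pred, target)
    edge_loss = F.l1_loss(sobel_gradients(pred), sobel_gradients(target))
    return pixel_loss + lambda_edge * edge_loss


if __name__ == "__main__":
    # Toy example: 6-band Landsat-like patches of size 64 x 64.
    pred = torch.rand(2, 6, 64, 64, requires_grad=True)
    target = torch.rand(2, 6, 64, 64)
    loss = fusion_loss(pred, target)
    loss.backward()
    print(float(loss))
```

The edge term penalizes mismatched gradient magnitudes, which is one way a loss of this kind can push the network toward preserving high-frequency landform detail that a pure MSE loss tends to smooth out.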
Pages: 1015-1036
Number of pages: 22