Remote Sensing Image Fusion Based on Generative Adversarial Network with Multi-stream Fusion Architecture

Cited by: 0
Authors
Lei D. [1]
Zhang C. [1]
Li Z. [1]
Wu Y. [2]
Affiliations
[1] College of Computer, Chongqing University of Posts and Telecommunications, Chongqing
[2] Institute of Web Intelligence, Chongqing University of Posts and Telecommunications, Chongqing
Source
Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology | 2020, Vol. 42, No. 8
Keywords
Computer vision; Generative adversarial network; Multi-stream fusion architecture; Remote sensing image fusion;
DOI
10.11999/JEIT17_190273
Abstract
Generative adversarial networks have received extensive attention in computer vision research, such as image fusion and image super-resolution, owing to their strong ability to generate high-quality images. Existing remote sensing image fusion methods based on generative adversarial networks only learn a mapping between images and lack domain knowledge specific to pan-sharpening. This paper proposes a remote sensing image fusion method based on an optimized generative adversarial network that integrates the spatial structure information of the panchromatic image. The proposed algorithm extracts the spatial structure information of the panchromatic image with a gradient operator; the extracted features are fed into both the discriminator and the generator, which adopts a multi-stream fusion architecture. The corresponding optimization objective and fusion rules are then designed to improve the quality of the fused image. Experiments on images acquired by the WorldView-3 satellite demonstrate that the proposed method generates high-quality fused images and outperforms most state-of-the-art remote sensing image fusion methods in both subjective visual quality and objective evaluation indicators.
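The core idea described in the abstract, extracting spatial structure from the panchromatic (PAN) image with a gradient operator and feeding it as an additional stream into a multi-stream generator, can be illustrated with a minimal sketch. The sketch below assumes PyTorch, a Sobel-based gradient operator, and arbitrary layer widths; it is an illustrative example, not the authors' published network configuration, loss design, or discriminator.

import torch
import torch.nn as nn
import torch.nn.functional as F


def sobel_gradient(pan: torch.Tensor) -> torch.Tensor:
    # Approximate spatial-structure extraction with a Sobel operator (assumption;
    # the paper only states that a gradient operator is used).
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=pan.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(pan, kx, padding=1)
    gy = F.conv2d(pan, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


class MultiStreamGenerator(nn.Module):
    # Three input streams (upsampled multispectral image, PAN image, PAN gradient)
    # are encoded separately and then fused into one multispectral output.
    def __init__(self, ms_bands: int = 4, feat: int = 32):
        super().__init__()
        self.ms_stream = nn.Sequential(nn.Conv2d(ms_bands, feat, 3, padding=1), nn.ReLU(True))
        self.pan_stream = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(True))
        self.grad_stream = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(True))
        self.fusion = nn.Sequential(
            nn.Conv2d(3 * feat, feat, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(feat, ms_bands, 3, padding=1),
        )

    def forward(self, ms_up: torch.Tensor, pan: torch.Tensor) -> torch.Tensor:
        grad = sobel_gradient(pan)
        feats = torch.cat([self.ms_stream(ms_up),
                           self.pan_stream(pan),
                           self.grad_stream(grad)], dim=1)
        # Residual connection keeps the spectral content of the upsampled MS image.
        return ms_up + self.fusion(feats)


if __name__ == "__main__":
    ms_up = torch.rand(1, 4, 256, 256)   # multispectral image upsampled to PAN resolution
    pan = torch.rand(1, 1, 256, 256)     # panchromatic image
    fused = MultiStreamGenerator()(ms_up, pan)
    print(fused.shape)                   # torch.Size([1, 4, 256, 256])

In the full method, the same gradient features would also be supplied to the discriminator, and the adversarial loss would be combined with the fusion rules and optimization objective designed in the paper.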
Pages: 1942-1949
Number of pages: 7