FusionGAN: A generative adversarial network for infrared and visible image fusion

Cited by: 1438
Authors
Ma, Jiayi [1 ]
Yu, Wei [1 ]
Liang, Pengwei [1 ]
Li, Chang [2 ]
Jiang, Junjun [3 ]
Affiliations
[1] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Hubei, Peoples R China
[2] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Anhui, Peoples R China
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Heilongjiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image fusion; Infrared image; Visible image; Generative adversarial network; Deep learning; MULTISCALE-DECOMPOSITION; PERFORMANCE; TRANSFORM; LIGHT; FOCUS;
DOI
10.1016/j.inffus.2018.09.004
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Infrared images can distinguish targets from their backgrounds based on differences in thermal radiation, and they work well day and night and under all weather conditions. By contrast, visible images provide texture details with high spatial resolution and definition in a manner consistent with the human visual system. This paper proposes a novel method to fuse these two types of information using a generative adversarial network, termed FusionGAN. Our method establishes an adversarial game between a generator and a discriminator: the generator aims to produce a fused image that carries the major infrared intensities together with additional visible gradients, while the discriminator aims to force the fused image to contain more of the details present in the visible image. As a result, the final fused image simultaneously preserves the thermal radiation of the infrared image and the textures of the visible image. In addition, FusionGAN is an end-to-end model, avoiding the manual design of complicated activity-level measurements and fusion rules required by traditional methods. Experiments on public datasets demonstrate the superiority of our strategy over state-of-the-art methods; our results look like sharpened infrared images with clearly highlighted targets and abundant details. Moreover, we also generalize FusionGAN to fuse images of different resolutions, e.g., a low-resolution infrared image and a high-resolution visible image. Extensive results demonstrate that our strategy generates clear and clean fused images that do not suffer from the noise caused by upsampling the infrared information.
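The loss design described in the abstract can be illustrated with a minimal PyTorch-style sketch, assuming single-channel image tensors of shape (N, 1, H, W). The gradient operator, the least-squares adversarial targets, and the weights xi and lam below are illustrative assumptions rather than the paper's exact formulation: the generator's content term pulls the fused image toward the infrared intensities and the visible gradients, while the adversarial terms let the discriminator push the fused image toward the visible domain.

import torch
import torch.nn.functional as F

def gradient(img):
    # Approximate image gradients with a Laplacian-style difference filter
    # (illustrative choice; any gradient operator would serve the same role).
    kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]], device=img.device).view(1, 1, 3, 3)
    return F.conv2d(img, kernel, padding=1)

def generator_loss(fused, ir, vis, d_score_fused, xi=5.0, lam=100.0):
    # Content term: keep infrared intensities and visible gradients.
    content = F.mse_loss(fused, ir) + xi * F.mse_loss(gradient(fused), gradient(vis))
    # Adversarial term: the generator tries to make the discriminator
    # score the fused image as if it were a visible image.
    adversarial = F.mse_loss(d_score_fused, torch.ones_like(d_score_fused))
    return adversarial + lam * content

def discriminator_loss(d_score_vis, d_score_fused):
    # The discriminator tries to tell visible images from fused ones,
    # which forces the generator to add visible-image detail.
    real = F.mse_loss(d_score_vis, torch.ones_like(d_score_vis))
    fake = F.mse_loss(d_score_fused, torch.zeros_like(d_score_fused))
    return real + fake

In a training loop the generator and discriminator would be small CNNs updated alternately with these two losses; the weights xi and lam here are hypothetical placeholders for whatever balance the trainer chooses.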
Pages: 11-26 (16 pages)