Patch-Wise Infrared and Visible Image Fusion Using Spatial Adaptive Weights

Cited by: 4
Authors
Minahil, Syeda [1 ]
Kim, Jun-Hyung [1 ]
Hwang, Youngbae [1 ]
Affiliations
[1] Chungbuk Natl Univ, Dept Intelligent Syst & Robot, Cheongju 28644, South Korea
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Issue 19
Funding
National Research Foundation of Singapore;
Keywords
infrared and visible image fusion; Generative Adversarial Network; patchGAN; dual-discriminator; spatial adaptive weights;
DOI
10.3390/app11199255
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
In infrared (IR) and visible image fusion, significant information is extracted from each source image and integrated into a single image with comprehensive content. We observe that the salient regions of the infrared image contain the targets of interest; we therefore enforce spatial adaptive weights derived from the infrared images. In this paper, a Generative Adversarial Network (GAN)-based method is proposed for infrared and visible image fusion. On top of an end-to-end network structure with dual discriminators, patch-wise discrimination is applied to reduce the blurry artifacts of previous image-level approaches. A new loss function is also proposed that uses the constructed weight maps to direct the adversarial training of the GAN so that the informative regions of the infrared images are preserved. Experiments are performed on two datasets, and ablation studies are also conducted. Qualitative and quantitative analysis shows that we achieve competitive results compared to existing fusion methods.
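The record does not give the paper's exact weight construction, but the idea in the abstract can be illustrated with a minimal sketch: derive a per-pixel weight map from the IR image (here simply min-max normalized intensity, an assumption) and average it over non-overlapping patches, mimicking how a PatchGAN-style discriminator scores local regions rather than the whole image. Both function names are hypothetical.

```python
import numpy as np

def spatial_adaptive_weights(ir, eps=1e-8):
    # Hypothetical weight scheme: min-max normalize IR intensity so that
    # hot/salient regions (likely targets) receive weights near 1.
    ir = ir.astype(np.float64)
    return (ir - ir.min()) / (ir.max() - ir.min() + eps)

def weighted_patch_scores(weights, patch=4):
    # Average the weight map over non-overlapping patch x patch blocks,
    # analogous to a patch-wise discriminator output grid.
    h, w = weights.shape
    h, w = h - h % patch, w - w % patch   # crop to a multiple of the patch size
    blocks = weights[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return blocks.mean(axis=(1, 3))

# Usage: an 8x8 IR frame with a single "hot" target in the top-left quadrant.
ir = np.zeros((8, 8))
ir[0:4, 0:4] = 255.0
wmap = spatial_adaptive_weights(ir)
scores = weighted_patch_scores(wmap, patch=4)   # 2x2 grid of patch scores
```

Patches overlapping the hot region receive scores near 1, the rest 0; in the paper's setting such a map would weight the adversarial loss toward the informative IR regions.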
Pages: 12
References
34 in total
[21] Ma, Jiayi; Ma, Yong; Li, Chang. Infrared and visible image fusion methods and applications: A survey. INFORMATION FUSION, 2019, 45: 153-178.
[22] Mao, Xudong; Li, Qing; Xie, Haoran; Lau, Raymond Y. K.; Wang, Zhen; Smolley, Stephen Paul. Least Squares Generative Adversarial Networks. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017: 2813-2821.
[23] Mirza, M. Conditional Generative Adversarial Nets. arXiv:1411.1784, 2014.
[24] Prabhakar, K. Ram; Srikar, V. Sai; Babu, R. Venkatesh. DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017: 4724-4732.
[25] Radford, A. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:1511.06434, 2015.
[26] Ronneberger, Olaf; Fischer, Philipp; Brox, Thomas. U-Net: Convolutional Networks for Biomedical Image Segmentation. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION, PT III, 2015, 9351: 234-241.
[27] Shao, Zhenfeng; Wu, Wenfu; Guo, Songjing. IHS-GTF: A Fusion Method for Optical and Synthetic Aperture Radar Data. REMOTE SENSING, 2020, 12(17): 1-20.
[28] Sun, Changqi; Zhang, Cong; Xiong, Naixue. Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review. ELECTRONICS, 2020, 9(12): 1-24.
[29] Wu, Chunxue; Du, Haiyan; Wu, Qunhui; Zhang, Sheng. Image Text Deblurring Method Based on Generative Adversarial Network. ELECTRONICS, 2020, 9(02).
[30] Xiang, Tianzhu; Yan, Li; Gao, Rongrong. A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain. INFRARED PHYSICS & TECHNOLOGY, 2015, 69: 53-61.