Patch-Wise Infrared and Visible Image Fusion Using Spatial Adaptive Weights

Cited by: 4
Authors
Minahil, Syeda [1 ]
Kim, Jun-Hyung [1 ]
Hwang, Youngbae [1 ]
Affiliations
[1] Chungbuk Natl Univ, Dept Intelligent Syst & Robot, Cheongju 28644, South Korea
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Issue 19
Funding
National Research Foundation, Singapore;
Keywords
infrared and visible image fusion; Generative Adversarial Network; patchGAN; dual-discriminator; spatial adaptive weights;
DOI
10.3390/app11199255
CLC Classification Code
O6 [Chemistry];
Discipline Code
0703;
Abstract
In infrared (IR) and visible image fusion, significant information is extracted from each source image and integrated into a single image with comprehensive content. We observe that the salient regions of the infrared image contain the targets of interest; therefore, we enforce spatial adaptive weights derived from the infrared images. In this paper, a Generative Adversarial Network (GAN)-based method is proposed for infrared and visible image fusion. Building on an end-to-end network structure with dual discriminators, patch-wise discrimination is applied to reduce the blurry artifacts of previous image-level approaches. A new loss function is also proposed that uses the constructed weight maps to direct the adversarial training of the GAN so that the informative regions of the infrared images are preserved. Experiments are performed on two datasets, and ablation studies are also conducted. Qualitative and quantitative analysis shows that the method achieves competitive results compared to existing fusion methods.
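The abstract names two core ingredients: spatial adaptive weight maps derived from the infrared image, and patch-wise (PatchGAN-style) discrimination with dual discriminators. The record gives no implementation details, so the following is only a minimal PyTorch sketch of those ideas under stated assumptions: the weight map is taken here as smoothed, min-max normalized IR intensity, the weighted pixel loss is an illustrative stand-in for the paper's loss function, and all function and parameter names (ir_weight_map, weighted_content_loss, PatchDiscriminator) are hypothetical rather than the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ir_weight_map(ir, blur_kernel=11):
    """Hypothetical spatial adaptive weight map: smoothed, min-max normalized IR
    intensity, so brighter (typically salient/target) IR regions get larger weights."""
    # Average-pool blur as a cheap smoothing step (an assumption, not from the paper).
    w = F.avg_pool2d(ir, blur_kernel, stride=1, padding=blur_kernel // 2)
    w_min = w.amin(dim=(2, 3), keepdim=True)
    w_max = w.amax(dim=(2, 3), keepdim=True)
    return (w - w_min) / (w_max - w_min + 1e-8)

def weighted_content_loss(fused, ir, vis, w):
    """Weight-map-steered pixel loss: the fused image follows the IR image where w is
    high and the visible image elsewhere (a sketch of the idea, not the exact loss)."""
    return (w * (fused - ir).abs() + (1 - w) * (fused - vis).abs()).mean()

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: outputs a grid of real/fake scores, one per
    receptive-field patch, instead of a single image-level decision."""
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (base, base * 2, base * 4):
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out_ch
        layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]  # patch score map
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    ir = torch.rand(2, 1, 128, 128)
    vis = torch.rand(2, 1, 128, 128)
    fused = torch.rand(2, 1, 128, 128)       # stand-in for a generator output
    w = ir_weight_map(ir)
    loss = weighted_content_loss(fused, ir, vis, w)
    d_ir, d_vis = PatchDiscriminator(), PatchDiscriminator()  # dual discriminators
    print(loss.item(), d_ir(fused).shape)    # patch score grid, e.g. [2, 1, 15, 15]
```

In this sketch the two discriminators would score the fused image against the IR and visible inputs respectively, each over local patches; the weight map plays the role described in the abstract of steering training toward preserving the informative IR regions.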
Pages: 12