MrFDDGAN: Multireceptive Field Feature Transfer and Dual Discriminator-Driven Generative Adversarial Network for Infrared and Color Visible Image Fusion

Cited by: 20
Authors
Li, Junwu [1 ]
Li, Binhua [1 ]
Jiang, Yaoxi [1 ]
Tian, Longwei [2 ]
Cai, Weiwei [3 ]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650500, Peoples R China
[2] Shanghai Jiao Tong Univ, Shanghai Key Lab Nav & Locat Based Serv, Shanghai 200240, Peoples R China
[3] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Image fusion; Generative adversarial networks; Kernel; Data mining; Task analysis; Generators; Dual discriminator generative adversarial network (GAN); feature interaction; feature reuse; infrared and color visible image fusion; MrFDDGAN; multireceptive field feature transfer;
DOI
10.1109/TIM.2023.3241999
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronics and Communications Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
Most previous deep-learning-based infrared and visible image fusion methods operate on gray-scale images and extract deep features with a single convolution-kernel receptive field, which inevitably loses information during feature transfer. Accordingly, this article proposes MrFDDGAN, a new generative adversarial network (GAN) fusion architecture based on multireceptive-field feature transfer and a dual discriminator, applied to infrared and color visible image fusion. First, three different receptive fields extract multiscale, multilevel deep features of the multimodal images on three channels, capturing the salient features of the source images more comprehensively from multiple views. Second, a feature interaction module in the encoder realizes information interaction and prefusion among the three feature channels. Third, the multilevel features fused in the encoder are concatenated with the deep features in the decoder to strengthen feature transfer and feature reuse. Finally, a dual-discriminator adversarial structure is adopted to balance the infrared intensity and the visible texture preserved in the fusion results. Qualitative and quantitative experiments are carried out on two public gray-scale infrared and visible image datasets and one infrared and color visible image dataset. The results show that the proposed MrFDDGAN achieves better subjective visual quality and objective performance than existing state-of-the-art fusion methods.
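The multireceptive-field extraction step described in the abstract can be sketched as parallel convolutions with different kernel sizes whose outputs are stacked as channels. A minimal NumPy sketch follows; the box filters and the kernel sizes (3, 5, 7) are illustrative stand-ins for the paper's learned convolution kernels, not the authors' implementation:

```python
import numpy as np

def conv2d_same(img, k):
    """2-D convolution with zero padding so the output matches the input size.

    A k x k box filter stands in for a learned convolution kernel.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="constant")
    kernel = np.full((k, k), 1.0 / (k * k))
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def multi_receptive_features(img, kernel_sizes=(3, 5, 7)):
    """One feature map per receptive field, stacked along a channel axis."""
    return np.stack([conv2d_same(img, k) for k in kernel_sizes], axis=0)

img = np.random.rand(16, 16)
feats = multi_receptive_features(img)
print(feats.shape)  # (3, 16, 16): one channel per receptive field
```

In the paper's architecture these parallel branches would feed the feature interaction module for prefusion; here the stacked output simply shows how the same image yields three views at different receptive-field scales.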
Pages: 28