Infrared and Visible Image Fusion Based on Mask and Cross-Dynamic Fusion

Cited by: 3
Authors
Fu, Qiang [1 ]
Fu, Hanxiang [1 ]
Wu, Yuezhou [1 ]
Affiliation
[1] Civil Aviat Flight Univ China, Sch Comp Sci, Guanghan 618307, Peoples R China
Keywords
dynamic convolution; image fusion; infrared image; mask; visible image; generative adversarial network; performance; framework; wavelet; nest
DOI
10.3390/electronics12204342
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
A single infrared or visible image has inherent limitations, and fusion technology was developed to overcome them: it generates a fused image that combines infrared saliency information with visible texture details. Most traditional fusion methods rely on hand-designed fusion strategies, some of which are too coarse and limit fusion performance. More recently, researchers have proposed deep-learning-based fusion methods, but some early fusion networks cannot fuse images adaptively because of unreasonable design choices. We therefore propose a mask and cross-dynamic fusion-based network called MCDFN, which adaptively preserves the salient features of infrared images and the texture details of visible images through an end-to-end fusion process. Specifically, we design a two-stage fusion network. In the first stage, we train an autoencoder so that the encoder and decoder learn feature extraction and reconstruction. In the second stage, the autoencoder is fixed, and we train the entire fusion network with a strategy that combines mask-based and cross-dynamic fusion; this strategy supports adaptive fusion of information between infrared and visible images across multiple dimensions. On the public TNO and RoadScene datasets, we compare the proposed method with nine other fusion methods, and the experimental results show that it achieves good performance on both datasets.
Pages: 22