MDFN: Mask deep fusion network for visible and infrared image fusion without reference ground-truth

Cited by: 32
Authors
Guo, Chaoxun [1 ,2 ,3 ]
Fan, Dandan [1 ,2 ,3 ]
Jiang, Zhixing [1 ,2 ]
Zhang, David [1 ,2 ,3 ]
Affiliations
[1] Chinese Univ Hong Kong Shenzhen, Shenzhen 518172, Guangdong, Peoples R China
[2] Robot Soc, Shenzhen Inst Artificial Intelligence, Shenzhen 518172, Guangdong, Peoples R China
[3] Shenzhen Res Inst Big Data, Shenzhen 518172, Guangdong, Peoples R China
Keywords
Image fusion; Mask strategy; Deep learning; Weight score estimation; Visible and infrared images; MULTISCALE DECOMPOSITION; PERFORMANCE; FOCUS;
DOI
10.1016/j.eswa.2022.118631
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A single infrared or visible image cannot clearly present both the texture details and the infrared information of a scene under poor illumination, bad weather, or other complex conditions. It is therefore necessary to fuse the infrared and visible images into one image. In this paper, we propose a novel deep fusion architecture for fusing visible and infrared images without any reference ground-truth. Unlike existing deep image fusion methods, which directly output the fused image, our network estimates a weight score for each pixel that determines the contributions of the two source images. This strategy transfers the valuable information in the source images to the fused image. Considering the salient thermal radiation information in the infrared image, a mask of the infrared image is generated and used to preserve valuable content from both source images in the fused result. Furthermore, a hybrid loss is designed to make the fused image consistent with the two source images. Owing to the weight estimation, the mask strategy, and the hybrid loss, the images fused by our proposed method jointly maintain thermal radiation and texture details, achieving state-of-the-art performance compared with existing fusion approaches. Our code is publicly available at https://github.com/NlCxg/MDFN.
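The core idea described in the abstract can be sketched in a few lines: a per-pixel weight map blends the two source images, an infrared mask marks salient thermal regions, and a mask-weighted consistency term ties the fused image back to both sources. The sketch below is illustrative only; the thresholding rule, the fixed weight map, and the loss form are simplified assumptions, not the paper's actual network or hybrid loss.

```python
import numpy as np

def fuse_with_weights(ir, vis, w):
    """Per-pixel weighted fusion: w in [0, 1] is the infrared contribution."""
    return w * ir + (1.0 - w) * vis

def infrared_mask(ir, quantile=0.9):
    """Binary mask of salient thermal regions (hypothetical quantile rule)."""
    return (ir >= np.quantile(ir, quantile)).astype(ir.dtype)

def consistency_loss(fused, ir, vis, mask):
    """Simplified mask-weighted consistency with both sources (illustrative)."""
    return float(np.mean(mask * (fused - ir) ** 2
                         + (1.0 - mask) * (fused - vis) ** 2))

# Toy 2x2 example; in MDFN the weight map w is predicted by the network.
ir  = np.array([[0.9, 0.1], [0.8, 0.2]])
vis = np.array([[0.3, 0.7], [0.4, 0.6]])
w   = np.array([[0.8, 0.2], [0.7, 0.3]])  # stand-in for the estimated weights
fused = fuse_with_weights(ir, vis, w)
mask = infrared_mask(ir)
loss = consistency_loss(fused, ir, vis, mask)
```

Here the mask steers the fused image toward the infrared source only where thermal radiation is salient, while elsewhere the visible image dominates, which mirrors the intent of the mask strategy described above.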
Pages: 12