Infrared and visible image fusion with ResNet and zero-phase component analysis

Cited by: 254
Authors
Li, Hui [1 ]
Wu, Xiao-jun [1 ]
Durrani, Tariq S. [2 ]
Affiliations
[1] Jiangnan Univ, Jiangsu Prov Engn Lab Pattern Recognit & Computat, Wuxi 214122, Jiangsu, Peoples R China
[2] Univ Strathclyde, Dept Elect & Elect Engn, Glasgow G1 1XW, Lanark, Scotland
Keywords
Image fusion; Deep learning; Residual network; Zero-phase component analysis; Infrared image; Visible image; Shearlet transform;
DOI
10.1016/j.infrared.2019.103039
CLC classification number
TH7 [Instruments and meters];
Discipline classification codes
0804 ; 080401 ; 081102 ;
Abstract
In image fusion approaches, feature extraction and feature processing are key tasks, and fusion performance is directly affected by the choice of features and processing methods. However, most deep learning-based methods use deep features directly, without further processing, which degrades fusion performance in some cases. To address these drawbacks, this paper proposes a novel fusion framework based on deep features and zero-phase component analysis (ZCA). First, a residual network (ResNet) is used to extract deep features from the source images. Then ZCA and the l1-norm are used to normalize the deep features and obtain initial weight maps. The final weight maps are obtained by applying a soft-max operation to the initial weight maps. Finally, the fused image is reconstructed using a weighted-averaging strategy. Experimental results demonstrate that, compared with existing fusion methods, the proposed framework achieves better performance in both objective assessment and visual quality. The code for our fusion algorithm is available at https://github.com/hli1221/imagefusion_resnet50.
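The pipeline the abstract describes (deep features → ZCA whitening → l1-norm activity maps → per-pixel soft-max weights → weighted averaging) can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the ResNet feature extractor is omitted (any per-pixel feature maps will do), and interpreting the "soft-max operation" as a per-pixel normalization of the l1 activity maps is an assumption.

```python
import numpy as np

def zca_whiten(features, eps=1e-5):
    """ZCA-whiten an (N, C) feature matrix: decorrelate channels
    while staying as close as possible to the original space."""
    centered = features - features.mean(axis=0)
    cov = centered.T @ centered / centered.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # ZCA transform
    return centered @ W

def fuse(img_a, img_b, feats_a, feats_b):
    """Fuse two grayscale source images (H, W) from deep feature
    maps (H, W, C) resized to the image resolution."""
    h, w, c = feats_a.shape
    activity = []
    for f in (feats_a, feats_b):
        z = zca_whiten(f.reshape(-1, c)).reshape(h, w, c)
        activity.append(np.abs(z).sum(axis=2))  # l1-norm per pixel
    stack = np.stack(activity)
    # Per-pixel normalization of activity maps into fusion weights
    # (assumed reading of the paper's "soft-max operation").
    weights = stack / (stack.sum(axis=0) + 1e-12)
    # Weighted-averaging reconstruction of the fused image.
    return weights[0] * img_a + weights[1] * img_b
```

With identical feature maps for both inputs, the weights reduce to 0.5 each and the fused result is the plain average of the two source images, which is a quick sanity check of the weighting step.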
Pages: 10
Related references
39 items total
[21]   Microsoft COCO: Common Objects in Context [J].
Lin, Tsung-Yi ;
Maire, Michael ;
Belongie, Serge ;
Hays, James ;
Perona, Pietro ;
Ramanan, Deva ;
Dollar, Piotr ;
Zitnick, C. Lawrence .
COMPUTER VISION - ECCV 2014, PT V, 2014, 8693 :740-755
[22]   Infrared and visible image fusion method based on saliency detection in sparse domain [J].
Liu, C. H. ;
Qi, Y. ;
Ding, W. R. .
INFRARED PHYSICS & TECHNOLOGY, 2017, 83 :94-102
[23]  
Liu G., 2010, P 27 INT C MACHINE L, DOI 10.1109/ICDMW.2010.64
[24]   Multi-focus image fusion with a deep convolutional neural network [J].
Liu, Yu ;
Chen, Xun ;
Peng, Hu ;
Wang, Zengfu .
INFORMATION FUSION, 2017, 36 :191-207
[25]   Image Fusion With Convolutional Sparse Representation [J].
Liu, Yu ;
Chen, Xun ;
Ward, Rabab K. ;
Wang, Z. Jane .
IEEE SIGNAL PROCESSING LETTERS, 2016, 23 (12) :1882-1886
[26]   The infrared and visible image fusion algorithm based on target separation and sparse representation [J].
Lu Xiaoqi ;
Zhang Baohua ;
Zhao Ying ;
Liu He ;
Pei Haiquan .
INFRARED PHYSICS & TECHNOLOGY, 2014, 67 :397-407
[27]   Infrared and visible image fusion via gradient transfer and total variation minimization [J].
Ma, Jiayi ;
Chen, Chen ;
Li, Chang ;
Huang, Jun .
INFORMATION FUSION, 2016, 31 :100-109
[28]   Infrared and visible image fusion based on visual saliency map and weighted least square optimization [J].
Ma, Jinlei ;
Zhou, Zhiqiang ;
Wang, Bo ;
Zong, Hua .
INFRARED PHYSICS & TECHNOLOGY, 2017, 82 :8-17
[29]   Perceptual Quality Assessment for Multi-Exposure Image Fusion [J].
Ma, Kede ;
Zeng, Kai ;
Wang, Zhou .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (11) :3345-3356
[30]   DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs [J].
Prabhakar, K. Ram ;
Srikar, V. Sai ;
Babu, R. Venkatesh .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :4724-4732