Infrared and visible image fusion via gradientlet filter

Cited by: 83
Authors
Ma, Jiayi [1 ]
Zhou, Yi [1 ]
Affiliations
[1] Electronic Information School, Wuhan University, Wuhan 430072, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Image fusion; Fuzzy gradient threshold function; Gradientlet filter; Saliency map; Infrared; Multiscale transform; Contourlet transform; Framework
DOI
10.1016/j.cviu.2020.103016
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose an image filter based on a fuzzy gradient threshold function and global optimization, termed the gradientlet filter, from the perspective of separating luminance and gradient. It removes small-gradient textures and noise while preserving the overall brightness and edge gradients of an image. Building on the gradientlet filter and image saliency, we further propose a new method for infrared and visible image fusion that overcomes the low contrast, edge blurring, and noise found in traditional fused images. First, the gradientlet filter decomposes the source images into approximate layers and residual layers, where the former capture the overall brightness of the source images without edge blurring or noise, and the latter capture their small-gradient texture and noise. Second, according to the characteristics of the approximate and residual layers, we propose contrast and gradient saliency maps and construct the corresponding weight matrices. Finally, the fused image is obtained by fusing and reconstructing the previously obtained sub-images with these weight matrices. Extensive experiments on publicly available databases demonstrate the advantages of our method over state-of-the-art methods in maintaining image contrast, improving target saliency, preventing edge blurring, and reducing noise.
Pages: 12
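The abstract outlines a three-step pipeline: edge-aware decomposition into approximate and residual layers, saliency-based weighting of each layer, and weighted reconstruction. The sketch below illustrates that pipeline only in broad strokes; it assumes a plain Gaussian smoother as a stand-in for the gradientlet filter (whose fuzzy gradient threshold function and global optimization are not specified in this record) and uses illustrative definitions of the contrast and gradient saliency maps, so it is not the authors' implementation.

```python
# Minimal sketch of the two-layer fusion pipeline described in the abstract.
# NOTE: the actual gradientlet filter is replaced by a Gaussian smoother, and
# the saliency definitions are illustrative assumptions, not the paper's formulas.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def decompose(img, sigma=3.0):
    """Split an image into an approximate (base) layer and a residual layer.

    Stand-in for the gradientlet filter: a Gaussian low-pass keeps the overall
    brightness, while the residual keeps small-gradient texture and noise.
    """
    base = gaussian_filter(img, sigma)
    return base, img - base

def contrast_saliency(base, sigma=9.0):
    """Assumed contrast saliency: deviation from the local mean brightness."""
    return np.abs(base - gaussian_filter(base, sigma))

def gradient_saliency(residual):
    """Assumed gradient saliency: Sobel gradient magnitude of the residual."""
    gx, gy = sobel(residual, axis=1), sobel(residual, axis=0)
    return np.hypot(gx, gy)

def weights(s_ir, s_vis, eps=1e-8):
    """Turn two saliency maps into per-pixel weights that sum to one."""
    w_ir = s_ir / (s_ir + s_vis + eps)
    return w_ir, 1.0 - w_ir

def fuse(ir, vis):
    """Decompose both inputs, weight each layer by its saliency, reconstruct."""
    base_ir, res_ir = decompose(ir)
    base_vis, res_vis = decompose(vis)
    wb_ir, wb_vis = weights(contrast_saliency(base_ir), contrast_saliency(base_vis))
    wr_ir, wr_vis = weights(gradient_saliency(res_ir), gradient_saliency(res_vis))
    fused_base = wb_ir * base_ir + wb_vis * base_vis
    fused_res = wr_ir * res_ir + wr_vis * res_vis
    return np.clip(fused_base + fused_res, 0.0, 1.0)

if __name__ == "__main__":
    # Placeholders for a registered infrared/visible pair, values in [0, 1].
    ir = np.random.rand(256, 256)
    vis = np.random.rand(256, 256)
    print(fuse(ir, vis).shape)  # (256, 256)
```

In the paper's formulation, the decomposition step would be performed by the gradientlet filter itself, so the approximate layer would retain sharp edge gradients rather than the blurred edges a Gaussian smoother produces.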