Infrared and Visible Image Fusion Based on Visual Saliency and NSCT

Cited by: 0
Authors
Fu Z.-Z. [1 ]
Wang X. [1 ]
Li X.-F. [1 ]
Xu J. [1 ]
Affiliations
[1] School of Communication and Information Engineering, University of Electronic Science and Technology of China, Chengdu
Source
Dianzi Keji Daxue Xuebao / Journal of the University of Electronic Science and Technology of China | 2017, Vol. 46, No. 2
Keywords
Guided filter; Image fusion; NSCT; Saliency
DOI
10.3969/j.issn.1001-0548.2017.02.007
Abstract
An infrared and visible image fusion algorithm based on visual saliency and the non-subsampled contourlet transform (NSCT) is proposed. First, the frequency-tuned saliency detection method is improved with a guided filter and applied to detect the saliency of the infrared image. Then the infrared and visible images are decomposed into low-frequency and high-frequency sub-bands by NSCT. Finally, the saliency map of the infrared image guides the fusion of the low-frequency sub-band, while the rule of maximum absolute value selection is used for the high-frequency sub-bands. Experimental results demonstrate that, compared with several other algorithms, the proposed method highlights infrared targets while preserving rich background information in the fused image, yielding better visual effects and objective quality evaluations. © 2017, Editorial Board of Journal of the University of Electronic Science and Technology of China. All rights reserved.
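The abstract outlines a four-step pipeline: guided-filter-refined frequency-tuned saliency on the infrared image, NSCT decomposition of both inputs, saliency-guided low-frequency fusion, and maximum-absolute-value high-frequency fusion. The sketch below illustrates that pipeline in Python under stated assumptions: the guided-filter refinement is one plausible reading of "improved by guided filter", the saliency-weighted averaging rule for the low-frequency sub-band is assumed (the abstract does not give its exact form), and a simple Gaussian low-pass/high-pass split stands in for the NSCT, which has no standard Python implementation. Input file names are hypothetical.

```python
# Hedged sketch of the fusion pipeline described in the abstract.
# Assumed details (not from the paper): guided-filter parameters, the
# saliency-weighted low-frequency rule, and a Gaussian split instead of NSCT.
import cv2
import numpy as np


def guided_filter(guide, src, radius=8, eps=1e-3):
    """He et al. guided filter (grayscale guide), built from box filters."""
    mean_I = cv2.boxFilter(guide, -1, (radius, radius))
    mean_p = cv2.boxFilter(src, -1, (radius, radius))
    corr_Ip = cv2.boxFilter(guide * src, -1, (radius, radius))
    corr_II = cv2.boxFilter(guide * guide, -1, (radius, radius))
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    mean_a = cv2.boxFilter(a, -1, (radius, radius))
    mean_b = cv2.boxFilter(b, -1, (radius, radius))
    return mean_a * guide + mean_b


def ft_saliency_guided(ir, radius=8, eps=1e-3):
    """Frequency-tuned saliency of a single-channel IR image, refined by a
    guided filter (one plausible reading of 'improved by guided filter')."""
    smooth = cv2.GaussianBlur(ir, (5, 5), 0)
    sal = np.abs(ir.mean() - smooth)            # FT-style saliency
    sal = guided_filter(ir, sal, radius, eps)   # edge-aware refinement
    sal -= sal.min()
    return sal / (sal.max() + 1e-12)            # normalize to [0, 1]


def decompose(img, sigma=5.0):
    """Stand-in for NSCT: one low-pass / high-pass split via Gaussian blur."""
    low = cv2.GaussianBlur(img, (0, 0), sigma)
    return low, img - low


def fuse(ir, vis):
    """Fuse registered, same-size grayscale IR and visible images."""
    ir = ir.astype(np.float64) / 255.0
    vis = vis.astype(np.float64) / 255.0
    sal = ft_saliency_guided(ir)
    ir_low, ir_high = decompose(ir)
    vis_low, vis_high = decompose(vis)
    # Low-frequency rule: saliency-weighted average (assumed form).
    fused_low = sal * ir_low + (1.0 - sal) * vis_low
    # High-frequency rule: maximum absolute value selection (per the abstract).
    fused_high = np.where(np.abs(ir_high) >= np.abs(vis_high), ir_high, vis_high)
    fused = np.clip(fused_low + fused_high, 0.0, 1.0)
    return (fused * 255).astype(np.uint8)


if __name__ == "__main__":
    # Hypothetical input files; any registered IR/visible pair of equal size works.
    ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
    vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("fused.png", fuse(ir, vis))
```

The single-level Gaussian split keeps the example self-contained; swapping in a real NSCT (or another multiscale transform such as a wavelet or Laplacian pyramid) only changes the `decompose` step, since the two fusion rules operate per sub-band.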
Pages: 357-362
References (16 in total)
[1] Goshtasby A.A., Nikolov S., Image fusion: Advances in the state of the art, Information Fusion, 8, 2, pp. 114-118, (2007)
[2] Toet A., Hogervorst M.A., Nikolov S.G., Et al., Towards cognitive image fusion, Information Fusion, 11, 2, pp. 95-113, (2010)
[3] Kong W., Zhang L., Lei Y., Novel fusion method for visible light and infrared images based on NSST-SF-PCNN, Infrared Physics & Technology, 65, pp. 103-112, (2014)
[4] Da Cunha A.L., Zhou J., Do M.N., The nonsubsampled contourlet transform: Theory, design, and applications, IEEE Transactions on Image Processing, 15, 10, pp. 3089-3101, (2006)
[5] Borji A., Itti L., State-of-the-art in visual attention modeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 1, pp. 185-207, (2013)
[6] Han J., Pauwels E.J., De Zeeuw P., Fast saliency-aware multi-modality image fusion, Neurocomputing, 111, pp. 70-80, (2013)
[7] Liu H., Zhu T., Zhao J., Infrared and visible image fusion based on region of interest detection and nonsubsampled contourlet transform, Journal of Shanghai Jiaotong University (Science), 18, pp. 526-534, (2013)
[8] Zhao J., Zhou Q., Chen Y., Et al., Fusion of visible and infrared images using saliency analysis and detail preserving based image decomposition, Infrared Physics & Technology, 56, pp. 93-99, (2013)
[9] Cui G., Feng H., Xu Z., Et al., Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition, Optics Communications, 341, pp. 199-209, (2015)
[10] Hou X., Zhang L., Saliency detection: A spectral residual approach, IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, (2007)