Night-vision Image Fusion Based on Intensity Transformation and Two-scale Decomposition

Cited by: 0
Authors
Zhu H. [1]
Liu Y. [1]
Zhang W. [2,3]
Affiliations
[1] School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun
[2] Photoelectric Engineering College, Changchun University of Science and Technology, Changchun
[3] Academy of Opto-electronics, Chinese Academy of Sciences, Beijing
Source
Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology | 2019, Vol. 41, No. 3
Keywords
Image fusion; Intensity transformation; Two-scale decomposition; Visual saliency detection;
DOI
10.11999/JEITdzyxxxb-41-3-640
Abstract
To produce night-vision fusion images better suited to human perception, a novel night-vision image fusion algorithm is proposed based on intensity transformation and two-scale decomposition. First, the pixel values of the infrared image are used as exponential factors in an intensity transformation of the visible image, so that infrared-visible image fusion reduces to the merging of homogeneous images. Second, the enhanced result and the original visible image are each decomposed into base and detail layers with a simple average filter. Third, the detail layers are fused using visual-saliency weight maps. Finally, the fused image is reconstructed by combining these results. Because the proposed method presents its result in the visible spectrum band, the fused image is well suited to human visual perception. Experimental results show that the proposed method clearly outperforms the other five methods. In addition, its computation time is below 0.2 s, which meets real-time requirements. In the fused result, the details of the background remain clear while objects with large temperature differences are highlighted as well. © 2019, Science Press. All rights reserved.
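The four steps summarized in the abstract can be sketched roughly as follows. The paper's exact intensity mapping and visual-saliency measure are not given in this record, so the gamma-style exponent and the detail-magnitude weights below are illustrative assumptions, not the authors' formulas:

```python
import numpy as np

def box_filter(img, size):
    """Simple average (box) filter, applied separably along rows then columns."""
    k = np.ones(size) / size
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def fuse_night_vision(vis, ir, base_size=15):
    """Sketch of the four-step pipeline; vis and ir are grayscale
    float arrays in [0, 1] with the same shape."""
    # 1) Intensity transformation: the IR pixel value serves as an
    #    exponential factor that brightens the visible image
    #    (an assumed gamma-style form).
    enhanced = np.clip(vis ** (1.0 - 0.5 * ir), 0.0, 1.0)

    # 2) Two-scale decomposition with the average filter:
    #    base layer = local mean, detail layer = residual.
    base_e = box_filter(enhanced, base_size)
    base_v = box_filter(vis, base_size)
    det_e, det_v = enhanced - base_e, vis - base_v

    # 3) Fuse detail layers with weight maps; local detail magnitude
    #    stands in here for the paper's visual-saliency detection.
    w_e, w_v = np.abs(det_e) + 1e-6, np.abs(det_v) + 1e-6
    fused_detail = (w_e * det_e + w_v * det_v) / (w_e + w_v)

    # 4) Reconstruct: averaged base layers plus the fused detail layer.
    return np.clip(0.5 * (base_e + base_v) + fused_detail, 0.0, 1.0)
```

Because the enhanced image already carries the infrared intensity cue, both decomposed inputs live in the visible band, which is what lets a simple per-pixel weighted merge of the detail layers suffice.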
Pages: 640-648
Page count: 8