A novel fusion method for infrared and visible images under poor illumination conditions

Times Cited: 1
Authors
Li, Zhijian [1 ,2 ]
Yang, Fengbao [1 ]
Ji, Linna [1 ]
Affiliations
[1] North Univ China, Sch Informat & Commun Engn, Taiyuan 030051, Peoples R China
[2] Shanxi Coll Technol, Shuozhou 036000, Peoples R China
Keywords
Image fusion; Infrared image; Visible image; Poor illumination; Support value; PERFORMANCE; DECOMPOSITION; ENHANCEMENT;
DOI
10.1016/j.infrared.2023.104773
CLC Number
TH7 [Instruments and Meters];
Subject Classification Codes
0804; 080401; 081102;
Abstract
Most infrared and visible image fusion methods are designed on the premise that visible images carry rich scene information and more details, such as edges and textures, than infrared images, while infrared images carry prominent thermal target information. Under poor illumination conditions, however, most areas of visible images are dark, may contain considerable noise, and lack the corresponding detail information found in infrared images. As a result, the fused images produced by such methods suffer from information loss, low contrast, and inconspicuous targets. To solve this problem, we propose a novel fusion method. Firstly, an improved rolling guidance filter, named RFRGF, is proposed to decompose the source images into small-scale detail, large-scale detail, and base layers. Secondly, for the fusion of small-scale detail layers, a new nonlinear function-based rule is proposed to transfer more texture information from source images captured under poor illumination to the fused image. For the fusion of large-scale detail layers, a novel rule based on the weighted sum of support values (WSSV) is constructed to retain details effectively. For the fusion of base layers, a rule based on the visual saliency map (VSM) is adopted to ensure high contrast and a good overall appearance of the fused image. Moreover, BIMEF and morphological bright and dark details (MBD) are used to further enhance the fused image's contrast and details, making targets more conspicuous. Specifically, BIMEF is adopted to enhance the visible image before decomposition, while the MBD, obtained by two selective rules based on morphological top- and bottom-hat transformations (MTB), is used to enhance the base layer. Experimental results show that the proposed method outperforms other methods, including several state-of-the-art ones, especially in artifact suppression, information retention, contrast improvement, and target enhancement.
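The pipeline described in the abstract can be illustrated with a minimal sketch. Note that this is an illustrative simplification, not the authors' implementation: Gaussian smoothing at two scales stands in for the RFRGF decomposition, absolute-max selection stands in for the nonlinear and WSSV detail-fusion rules, a crude mean-deviation saliency stands in for the VSM, and morphological top-/bottom-hat operations approximate the MBD enhancement; the function names and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_opening, grey_closing

def decompose(img, sigma_small=1.0, sigma_large=4.0):
    # Two-scale smoothing as a stand-in for the paper's RFRGF decomposition.
    smooth1 = gaussian_filter(img, sigma_small)
    smooth2 = gaussian_filter(img, sigma_large)
    small_detail = img - smooth1        # small-scale detail layer
    large_detail = smooth1 - smooth2    # large-scale detail layer
    base = smooth2                      # base layer
    return small_detail, large_detail, base

def saliency_weight(base):
    # Crude visual-saliency stand-in: deviation from the mean intensity.
    return np.abs(base - base.mean())

def mbd_enhance(base, size=5):
    # Morphological bright (top-hat) and dark (bottom-hat) details,
    # added to / subtracted from the base layer to boost contrast.
    bright = base - grey_opening(base, size=size)
    dark = grey_closing(base, size=size) - base
    return base + bright - dark

def fuse(ir, vis):
    sd_i, ld_i, b_i = decompose(ir)
    sd_v, ld_v, b_v = decompose(vis)
    # Detail layers: absolute-max selection (simplified from the
    # paper's nonlinear and WSSV rules).
    sd = np.where(np.abs(sd_i) >= np.abs(sd_v), sd_i, sd_v)
    ld = np.where(np.abs(ld_i) >= np.abs(ld_v), ld_i, ld_v)
    # Base layers: saliency-weighted average (VSM stand-in).
    w_i, w_v = saliency_weight(b_i), saliency_weight(b_v)
    w = w_i / (w_i + w_v + 1e-8)
    base = w * b_i + (1.0 - w) * b_v
    # Enhance the fused base, then reinject both detail layers.
    return np.clip(mbd_enhance(base) + ld + sd, 0.0, 1.0)
```

In this sketch, enhancing only the base layer (rather than the final image) mirrors the abstract's choice of applying MBD to the base layer, so the injected detail layers are not distorted by the morphological operations.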
Pages: 17