Multi-scale saliency measure and orthogonal space for visible and infrared image fusion

Cited by: 10
Authors
Liu, Yaochen [1 ]
Dong, Lili [1 ]
Ren, Wei [1 ]
Xu, Wenhai [1 ]
Affiliations
[1] Dalian Maritime Univ, Dalian 116026, Peoples R China
Keywords
Image fusion; Saliency measure; Orthogonal space; INFORMATION; SCHEME; FOCUS;
DOI
10.1016/j.infrared.2021.103916
Chinese Library Classification (CLC)
TH7 [Instruments and Meters];
Discipline classification codes
0804 ; 080401 ; 081102 ;
Abstract
For infrared and visible image fusion technology, it has always been a challenge to effectively select useful information from the source images and integrate it, because the imaging principles of infrared and visible images differ widely. To solve this problem, a novel infrared and visible image fusion algorithm is proposed, which makes the following contributions: (i) an infrared visual saliency extraction scheme using a global measurement strategy is presented, (ii) a visible visual saliency measurement scheme using a local measurement strategy is proposed, and (iii) a fusion rule based on orthogonal space is designed to combine the extracted saliency maps. Specifically, to direct human attention to infrared targets, coarse-scale decomposition is performed, and a global measurement strategy is then used to obtain the saliency map. In addition, since visible images have rich textures, fine-scale decomposition lets the visual system attend to tiny details, and the visual saliency is then measured by a local measurement strategy. Unlike general fusion rules, an orthogonal space is constructed to integrate the saliency maps, which removes the correlation between the saliency maps and thus avoids mutual interference. Experiments on public databases demonstrate that the fusion results of the proposed algorithm are better than those of the comparison algorithms in both qualitative and quantitative assessment.
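The Python sketch below illustrates, under loose assumptions, the three-step pipeline the abstract describes: coarse-scale decomposition with a global saliency measurement for the infrared input, fine-scale decomposition with a local saliency measurement for the visible input, and a decorrelating (orthogonal-space) combination of the two saliency maps. All function names, the Gaussian decomposition, the window sizes, and the Gram-Schmidt projection used for decorrelation are illustrative choices, not the authors' actual formulation.

```python
# Minimal sketch of the pipeline outlined in the abstract.
# Every helper below is an illustrative assumption, not the paper's method.
import numpy as np
from scipy import ndimage

def coarse_saliency(ir, sigma=8.0):
    """Global saliency for the infrared image: how far each pixel of a
    coarse-scale (heavily smoothed) layer deviates from the global mean."""
    base = ndimage.gaussian_filter(ir, sigma)           # coarse-scale layer
    sal = np.abs(base - base.mean())                    # global measurement
    return sal / (sal.max() + 1e-12)

def fine_saliency(vis, sigma=1.0, win=7):
    """Local saliency for the visible image: local contrast of the
    fine-scale (detail) layer within a small window."""
    detail = vis - ndimage.gaussian_filter(vis, sigma)  # fine-scale layer
    local_mean = ndimage.uniform_filter(detail, win)
    sal = np.abs(detail - local_mean)                   # local measurement
    return sal / (sal.max() + 1e-12)

def orthogonal_fuse(ir, vis, s_ir, s_vis):
    """Decorrelate the saliency maps by projecting the visible map onto the
    orthogonal complement of the infrared map (Gram-Schmidt), then use the
    decorrelated maps as per-pixel fusion weights."""
    a = s_ir.ravel()
    b = s_vis.ravel()
    b_orth = b - (a @ b) / (a @ a + 1e-12) * a          # remove correlation
    w_ir = s_ir
    w_vis = np.clip(b_orth, 0, None).reshape(s_vis.shape)
    return (w_ir * ir + w_vis * vis) / (w_ir + w_vis + 1e-12)
```

Given two registered single-channel float images ir and vis in [0, 1], the maps would be combined as fused = orthogonal_fuse(ir, vis, coarse_saliency(ir), fine_saliency(vis)). The projection step mirrors the stated goal of removing the correlation between the two saliency maps so that neither map's response interferes with the other's contribution to the fused result.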
Pages: 11