Infrared and visible images fusion by using sparse representation and guided filter

Cited by: 17
Authors
Li, Qilei [1 ,2 ]
Wu, Wei [1 ,2 ]
Lu, Lu [1 ]
Li, Zuoyong [2 ]
Ahmad, Awais [3 ]
Jeon, Gwanggil [4 ,5 ]
Affiliations
[1] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610064, Sichuan, Peoples R China
[2] Minjiang Univ, Fujian Prov Key Lab Informat Proc & Intelligent C, Fuzhou, Fujian, Peoples R China
[3] Bahria Univ, Dept Comp Sci, Islamabad, Pakistan
[4] Xidian Univ, Sch Elect Engn, Xian, Shaanxi, Peoples R China
[5] Incheon Natl Univ, Dept Embedded Syst Engn, Incheon, South Korea
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Guided filter (GF); image fusion; sparse representation (SR); weight map; MULTISENSOR IMAGE; MULTISCALE; LEVEL; SUPERRESOLUTION; TRANSFORM;
DOI
10.1080/15472450.2019.1643725
CLC Classification
U [Transportation];
Discipline Code
08; 0823;
Abstract
Infrared and visible images play an important role in transportation systems since they can monitor traffic conditions around the clock. However, visible images are susceptible to imaging conditions, and infrared images lack fine detail. Infrared and visible image fusion techniques can fuse these two modalities into a single image containing more useful information. In this paper, we propose an effective infrared and visible image fusion method for traffic systems. First, weight maps are computed from the sparse coefficients. Next, the infrared and visible pair is decomposed into high-frequency layers (HFLs) and low-frequency layers (LFLs). Since the two layers contain different structure and texture information, a guided filter is used to optimize the weight maps according to the distinct characteristics of the infrared and visible pair, so that the representative components are extracted. Finally, the two-scale layers are reconstructed according to the weight maps. Experimental results demonstrate that our method outperforms other popular approaches in terms of both subjective perception and objective metrics.
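The abstract's pipeline (two-scale decomposition, weight-map construction, guided-filter refinement, weighted reconstruction) can be sketched in Python with NumPy. This is a simplified illustration, not the authors' implementation: the absolute high-frequency magnitude stands in for the paper's sparse-coefficient activity measure (the actual method computes weights from sparse coding over a learned dictionary), and all radii and regularization values are illustrative.

```python
import numpy as np

def box(img, r):
    """Mean filter of radius r via 2-D cumulative sums with edge padding."""
    k = 2 * r + 1
    h, w = img.shape
    pad = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    return (c[k:k+h, k:k+w] - c[:h, k:k+w] - c[k:k+h, :w] + c[:h, :w]) / (k * k)

def guided_filter(I, p, r, eps):
    """Guided filter: edge-preserving smoothing of p, guided by image I."""
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)   # per-window linear coefficients
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)

def fuse(ir, vis, r_dec=15, r_hf=4, r_lf=30, eps=1e-2):
    # 1) Two-scale decomposition: box-filtered LFL plus HFL residual.
    lf_ir, lf_vis = box(ir, r_dec), box(vis, r_dec)
    hf_ir, hf_vis = ir - lf_ir, vis - lf_vis
    # 2) Binary weight map; |HFL| is a saliency stand-in for the paper's
    #    sparse-coefficient activity measure.
    w = (np.abs(hf_ir) >= np.abs(hf_vis)).astype(float)
    # 3) Guided-filter refinement with layer-specific parameters, so the
    #    weight edges align with structures in the source image.
    w_hf = np.clip(guided_filter(ir, w, r_hf, eps), 0.0, 1.0)
    w_lf = np.clip(guided_filter(ir, w, r_lf, eps * 10), 0.0, 1.0)
    # 4) Weighted reconstruction of both layers.
    return (w_lf * lf_ir + (1 - w_lf) * lf_vis
            + w_hf * hf_ir + (1 - w_hf) * hf_vis)
```

A larger radius and regularization for the low-frequency weights smooths them aggressively, while the sharper high-frequency weights preserve detail boundaries, mirroring the layer-specific treatment described in the abstract.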
Pages: 254-263
Page count: 10