A Novel Image Fusion Framework Based on Sparse Representation and Pulse Coupled Neural Network

Cited by: 17
Authors
Yin, Li [1 ,2 ]
Zheng, Mingyao [3 ,4 ,5 ]
Qi, Guanqiu [6 ]
Zhu, Zhiqin [5 ]
Jin, Fu [3 ,4 ]
Sim, Jaesung [7 ]
Affiliations
[1] Chongqing Univ Canc Hosp, Chongqing Key Lab Translat Res Canc Metastasis &, Chongqing 400030, Peoples R China
[2] Chongqing Canc Hosp, Chongqing Canc Inst, Chongqing 400030, Peoples R China
[3] Chongqing Univ, Canc Hosp, Key Lab Biorheol Sci & Technol, Minist Educ, Chongqing 400044, Peoples R China
[4] Chongqing Canc Hosp, Chongqing Canc Inst, Chongqing 400044, Peoples R China
[5] Chongqing Univ Posts & Telecommun, Coll Automat, Chongqing 400065, Peoples R China
[6] Buffalo State Coll, Comp Informat Syst Dept, Buffalo, NY 14222 USA
[7] Mansfield Univ Penn, Dept Math & Comp Informat Sci, Mansfield, PA 16933 USA
Keywords
Multi-sensor fusion; NSST; PCNN; sparse representation; dictionary learning; image fusion; INFORMATION
DOI
10.1109/ACCESS.2019.2929303
CLC Number
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Image fusion techniques synthesize two or more images captured in the same scene to obtain a single high-quality image. However, most existing fusion algorithms are aimed at single-modal images. To improve the fusion quality of multi-modal images, a novel multi-sensor image fusion framework based on the non-subsampled shearlet transform (NSST) is proposed. First, the proposed solution uses NSST to decompose the source images into high- and low-frequency components. Then, an improved pulse coupled neural network (PCNN) is proposed to process the high-frequency components, which improves feature extraction from the high-frequency component. After that, a sparse representation (SR) based measure, including compact dictionary learning and a Max-L1 fusion rule, is designed to enhance the detailed features of the low-frequency component. Finally, the fused image is obtained by reconstructing the high- and low-frequency components via the inverse NSST. The proposed method is compared with several existing fusion methods, and the experimental results show that it outperforms the other algorithms in both subjective and objective evaluation.
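The Max-L1 fusion rule mentioned in the abstract can be read as: for each patch, keep the sparse coefficient vector whose L1 norm is larger, since a larger L1 norm indicates more salient activity in that patch. A minimal NumPy sketch of this rule, assuming coefficients are stored one patch per column (the function name `max_l1_fuse` and the column layout are illustrative assumptions, not from the paper):

```python
import numpy as np

def max_l1_fuse(coeffs_a: np.ndarray, coeffs_b: np.ndarray) -> np.ndarray:
    """Fuse two sparse-coefficient matrices column-wise by the Max-L1 rule.

    Each column holds the sparse code of one image patch; the column with
    the larger L1 norm (more salient activity) is kept in the output.
    """
    l1_a = np.abs(coeffs_a).sum(axis=0)  # per-patch L1 norm, source A
    l1_b = np.abs(coeffs_b).sum(axis=0)  # per-patch L1 norm, source B
    # Broadcasting the (n_patches,) mask selects whole columns.
    return np.where(l1_a >= l1_b, coeffs_a, coeffs_b)
```

In a full pipeline, the fused coefficients would then be multiplied by the learned dictionary to reconstruct the fused low-frequency patches.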
Pages: 98290-98305
Number of pages: 16
Related References
51 entries in total
[1] Aishwarya, N. Proceedings of the 2016 IEEE International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), 2016: 2377. DOI 10.1109/WiSPNET.2016.7566567
[2] Bai, Xiangzhi; Liu, Miaoming; Chen, Zhiguo; Wang, Peng; Zhang, Yu. Multi-Focus Image Fusion Through Gradient-Based Decision Map Construction and Mathematical Morphology. IEEE Access, 2016, 4: 4749-4760
[3] Bai, Xiangzhi; Zhang, Yu; Zhou, Fugen; Xue, Bindang. Quadtree-based multi-focus image fusion using a weighted focus-measure. Information Fusion, 2015, 22: 105-118
[4] Chen, Yin. P C INF SCI SYST CIS, 2009: 518
[5] Chen, Yuli; Park, Sung-Kee; Ma, Yide; Ala, Rajeshkanna. A New Automatic Parameter Setting Method of a Simplified PCNN for Image Segmentation. IEEE Transactions on Neural Networks, 2011, 22(6): 880-892
[6] Cvejic, N.; Canagarajah, C. N.; Bull, D. R. Image fusion metric based on mutual information and Tsallis entropy. Electronics Letters, 2006, 42(11): 626-627
[7] Dahiya, Susheela; Garg, Pradeep Kumar; Jat, Mahesh K. A comparative study of various pixel-based image fusion techniques as applied to an urban environment. International Journal of Image and Data Fusion, 2013, 4(3): 197-213
[8] Das, Sudeb; Kundu, Malay Kumar. NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency. Medical & Biological Engineering & Computing, 2012, 50(10): 1105-1114
[9] De, Ishita; Chanda, Bhabatosh. Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure. Information Fusion, 2013, 14(2): 136-146
[10] Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming. Sparse Representation Based Image Interpolation With Nonlocal Autoregressive Modeling. IEEE Transactions on Image Processing, 2013, 22(4): 1382-1394