A novel multi-modality image fusion method based on image decomposition and sparse representation

Cited by: 322
Authors
Zhu, Zhiqin [1 ,2 ]
Yin, Hongpeng [1 ,2 ]
Chai, Yi [2 ]
Li, Yanxia [1 ,2 ]
Qi, Guanqiu [2 ,3 ]
Affiliations
[1] Chongqing Univ, Minist Educ, Key Lab Dependable Serv Comp Cyber Phys Soc, Chongqing 400030, Peoples R China
[2] Chongqing Univ, Coll Automat, Chongqing 400044, Peoples R China
[3] Arizona State Univ, Sch Comp Informat & Decis Syst Engn, Tempe, AZ 85287 USA
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Sparse representation; Dictionary construction; Multi-modality image fusion; Cartoon-texture decomposition; OBJECT RECOGNITION; QUALITY; CLASSIFICATION; INFORMATION; TRANSFORM; ALGORITHM; MODEL;
DOI
10.1016/j.ins.2017.09.010
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Multi-modality image fusion is an effective technique for fusing the complementary information of multi-modality images into a single integrated image. The complementary information not only enhances visibility for human observers, but also compensates for the limitations of each individual image. To preserve the structural information and present the detailed information of the source images, a novel image fusion scheme based on cartoon-texture decomposition and sparse representation is proposed. In the proposed method, the source multi-modality images are decomposed into cartoon and texture components. For the cartoon components, a spatial-domain method is presented to preserve morphological structure, and an energy-based fusion rule is used to retain the structural information of each source image. For the texture components, a sparse-representation-based fusion method is proposed, and a dictionary with strong representation ability is trained for it. Finally, the fused cartoon and texture components are integrated according to a texture-enhancement fusion rule. Experimental results clearly show that the proposed method outperforms state-of-the-art methods in both visual and quantitative evaluations. (C) 2017 Elsevier Inc. All rights reserved.
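The abstract describes a four-stage pipeline: decompose each source image into cartoon and texture components, fuse the cartoon components with an energy-based rule, fuse the texture components by sparse coding over a learned dictionary, and recombine the fused components. The Python sketch below only illustrates that stage structure and is not the authors' implementation: the Gaussian low-pass split, the per-pixel energy rule, the max-l1 coefficient selection, the additive recombination, and all parameter values (patch size, dictionary size, sparsity level) are illustrative assumptions standing in for the paper's actual cartoon-texture decomposition, fusion rules, and trained dictionary.

# Minimal sketch of the fusion flow described above, assuming Python with
# NumPy, SciPy and scikit-learn.  All design choices here are assumptions,
# not the method published in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def decompose(img, sigma=3.0):
    # Low-pass part approximates the cartoon component; the residual
    # approximates the texture component.
    cartoon = gaussian_filter(img, sigma)
    return cartoon, img - cartoon

def fuse_cartoon(c1, c2, sigma=3.0):
    # Energy-based rule: keep, per pixel, the cartoon component with the
    # larger local energy (smoothed squared intensity).
    e1 = gaussian_filter(c1 ** 2, sigma)
    e2 = gaussian_filter(c2 ** 2, sigma)
    return np.where(e1 >= e2, c1, c2)

def fuse_texture(t1, t2, patch=8, n_atoms=128, k=4):
    # Sparse-representation rule: learn one dictionary on patches from both
    # texture components, sparse-code each, keep the coefficient vector with
    # the larger l1 activity, then rebuild the fused texture from patches.
    p1 = extract_patches_2d(t1, (patch, patch)).reshape(-1, patch * patch)
    p2 = extract_patches_2d(t2, (patch, patch)).reshape(-1, patch * patch)
    D = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
    D = D.fit(np.vstack([p1, p2])).components_
    a1 = sparse_encode(p1, D, algorithm="omp", n_nonzero_coefs=k)
    a2 = sparse_encode(p2, D, algorithm="omp", n_nonzero_coefs=k)
    pick = np.abs(a1).sum(axis=1) >= np.abs(a2).sum(axis=1)
    fused = np.where(pick[:, None], a1, a2) @ D
    return reconstruct_from_patches_2d(fused.reshape(-1, patch, patch), t1.shape)

def fuse(img1, img2):
    # img1, img2: grayscale source images of equal shape, float in [0, 1].
    c1, t1 = decompose(img1)
    c2, t2 = decompose(img2)
    return fuse_cartoon(c1, c2) + fuse_texture(t1, t2)

In practice the overlapping-patch set would be subsampled before dictionary learning, and the decomposition and fusion rules tuned per modality; the sketch only mirrors the stage structure the abstract describes.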
Pages: 516-529
Number of pages: 14