A novel dictionary learning approach for multi-modality medical image fusion

Cited by: 113
Authors
Zhu, Zhiqin [1 ,2 ]
Chai, Yi [2 ,3 ]
Yin, Hongpeng [1 ,2 ]
Li, Yanxia [1 ,2 ]
Liu, Zhaodong [1 ,2 ]
Affiliations
[1] Chongqing Univ, Key Lab Dependable Serv Comp Cyber Phys Soc, Minist Educ, Chongqing 400030, Peoples R China
[2] Chongqing Univ, Coll Automat, Chongqing 400044, Peoples R China
[3] Chongqing Univ, Coll Automat, State Key Lab Power Transmiss Equipment & Syst Se, Chongqing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Dictionary learning; Multi-modality medical image fusion; Informative sampling; Local density peaks clustering; SPARSE REPRESENTATION; PERFORMANCE; INFORMATION;
DOI
10.1016/j.neucom.2016.06.036
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-modality medical image fusion technology can integrate the complementary information of different modality medical images and obtain a more precise and reliable description of lesions. Dictionary learning based image fusion has drawn great attention from researchers and scientists for its high performance. The standard learning scheme uses the entire image for dictionary learning. However, in medical images, the informative region occupies only a small proportion of the whole image; most image patches carry limited and redundant information. Taking all the image patches for dictionary learning therefore introduces much low-value, redundant information, which degrades the medical image fusion quality. In this paper, a novel dictionary learning approach is proposed for image fusion. The proposed approach consists of three steps. Firstly, a novel image patch sampling scheme is proposed to obtain the informative patches. Secondly, a local density peaks based clustering algorithm classifies image patches with similar structural information into several patch groups, and each patch group is trained into a compact sub-dictionary by K-SVD. Finally, the sub-dictionaries are combined into a complete, informative and compact dictionary in which only important and useful information that effectively describes the medical image is retained. To show the efficiency of the proposed dictionary learning approach, the sparse coefficient vectors are estimated by a simultaneous orthogonal matching pursuit (SOMP) algorithm with the trained dictionary, and fused by the max-L1 rule. The comparative experimental results and analyses reveal that the proposed method achieves better image fusion quality than existing state-of-the-art methods. (C) 2016 Elsevier B.V. All rights reserved.
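Two of the building blocks named in the abstract can be sketched in minimal form: informative patch sampling (discarding flat, low-information patches before dictionary learning) and the max-L1 fusion rule applied to per-patch sparse coefficient vectors. This is an illustrative sketch only, not the authors' implementation: the variance-based sampling criterion, the threshold value, the function names, and the plain-list patch representation are all assumptions made here for clarity, and the clustering, K-SVD, and SOMP stages are omitted.

```python
def variance(patch):
    """Sample variance of a flattened patch (a list of pixel intensities)."""
    n = len(patch)
    mean = sum(patch) / n
    return sum((x - mean) ** 2 for x in patch) / n

def sample_informative(patches, threshold=10.0):
    """Keep only patches with enough intensity variation to be worth
    learning from; flat/redundant patches are dropped.  The variance
    criterion and threshold are illustrative assumptions."""
    return [p for p in patches if variance(p) > threshold]

def fuse_max_l1(coeffs_a, coeffs_b):
    """Max-L1 rule: for each patch position, keep the sparse coefficient
    vector (one per source image) with the larger L1 norm, i.e. the
    stronger activation over the shared dictionary."""
    fused = []
    for ca, cb in zip(coeffs_a, coeffs_b):
        if sum(abs(x) for x in ca) >= sum(abs(x) for x in cb):
            fused.append(ca)
        else:
            fused.append(cb)
    return fused

# Toy usage: two flat patches and one textured patch.
patches = [[5, 5, 5, 5], [0, 20, 0, 20], [7, 7, 7, 7]]
kept = sample_informative(patches)               # only the textured patch survives
fused = fuse_max_l1([[0.0, 2.0]], [[1.0, 0.5]])  # L1 norms 2.0 vs 1.5 -> first wins
print(kept, fused)
```

In the full pipeline, `fuse_max_l1` would operate on SOMP coefficient vectors over the learned joint dictionary, and the fused image would be reconstructed by multiplying the fused coefficients back through that dictionary.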
Pages: 471-482
Number of pages: 12
Related papers
50 records in total
  • [31] OmniFuse: A general modality fusion framework for multi-modality learning on low-quality medical data
    Wu, Yixuan
    Chen, Jintai
    Hu, Lianting
    Xu, Hongxia
    Liang, Huiying
    Wu, Jian
    INFORMATION FUSION, 2025, 117
  • [32] TUMOR SEGMENTATION VIA MULTI-MODALITY JOINT DICTIONARY LEARNING
    Wang, Yan
    Yu, Biting
    Wang, Lei
    Zu, Chen
    Luo, Yong
    Wu, Xi
    Yang, Zhipeng
    Zhou, Jiliu
    Zhou, Luping
    2018 IEEE 15TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2018), 2018, : 1336 - 1339
  • [33] Quasi-Conformal Hybrid Multi-modality Image Registration and its Application to Medical Image Fusion
    Lam, Ka Chun
    Lui, Lok Ming
    ADVANCES IN VISUAL COMPUTING, PT I (ISVC 2015), 2015, 9474 : 809 - 818
  • [34] Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion
    Chang Wang
    Yang Wu
    Yi Yu
    Jun Qiang Zhao
    Machine Vision and Applications, 2022, 33
  • [35] Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion
    Wang, Chang
    Wu, Yang
    Yu, Yi
    Zhao, Jun Qiang
    MACHINE VISION AND APPLICATIONS, 2022, 33 (05)
  • [36] Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and Beyond
    Liu, Zhu
    Liu, Jinyuan
    Wu, Guanyao
    Ma, Long
    Fan, Xin
    Liu, Risheng
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1240 - 1248
  • [37] A Novel Multi-Modality Image Simultaneous Denoising and Fusion Method Based on Sparse Representation
    Qi, Guanqiu
    Hu, Gang
    Mazur, Neal
    Liang, Huahua
    Haner, Matthew
    COMPUTERS, 2021, 10 (10)
  • [38] Learning based Multi-modality Image and Video Compression
    Lu, Guo
    Zhong, Tianxiong
    Geng, Jing
    Hu, Qiang
    Xu, Dong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 6073 - 6082
  • [39] DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion
    Zhao, Zixiang
    Bai, Haowen
    Zhu, Yuanzhi
    Zhang, Jiangshe
    Xu, Shuang
    Zhang, Yulun
    Zhang, Kai
    Meng, Deyu
    Timofte, Radu
    Van Gool, Luc
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 8048 - 8059
  • [40] Fast saliency-aware multi-modality image fusion
    Han, Jungong
    Pauwels, Eric J.
    de Zeeuw, Paul
    NEUROCOMPUTING, 2013, 111 : 70 - 80