Multimodal medical image fusion using convolutional neural network and extreme learning machine

Cited by: 9
Authors
Kong, Weiwei [1 ,2 ,3 ]
Li, Chi [1 ,2 ,3 ]
Lei, Yang [4 ]
Affiliations
[1] Xian Univ Posts & Telecommun, Sch Comp Sci & Technol, Xian, Peoples R China
[2] Shaanxi Key Lab Network Data Anal & Intelligent Pr, Xian, Peoples R China
[3] Xian Key Lab Big Data & Intelligent Comp, Xian, Peoples R China
[4] Engn Univ PAP, Coll Cryptog Engn, Xian, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
image fusion; modality; multimodal medical image; convolutional neural network; extreme learning machine; filter; algorithm; model
DOI
10.3389/fnbot.2022.1050981
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The emergence of multimodal medical imaging technology has greatly increased the accuracy of clinical diagnosis and etiological analysis. Nevertheless, each medical imaging modality unavoidably has its own limitations, so the fusion of multimodal medical images offers an effective solution. In this paper, a novel fusion method for multimodal medical images exploiting the convolutional neural network (CNN) and the extreme learning machine (ELM) is proposed. As a typical representative of deep learning, the CNN has been gaining popularity in the field of image processing. However, the CNN often suffers from drawbacks such as high computational costs and intensive human intervention. To this end, the convolutional extreme learning machine (CELM) model is constructed by incorporating the ELM into the traditional CNN model. The CELM serves as an important tool to extract and capture features of the source images from a variety of angles, and the final fused image is obtained by integrating these significant features. Experimental results indicate that the proposed method not only helps to enhance the accuracy of lesion detection and localization, but is also superior to current state-of-the-art methods in terms of both subjective visual performance and objective criteria.
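The abstract's core idea is that a CELM avoids the CNN's training cost by keeping the convolutional hidden layer fixed at random initialization and solving only the output weights in closed form, in the usual ELM fashion. The following is a minimal NumPy sketch of that idea; the kernel count, image size, sigmoid activation, and regression targets are illustrative assumptions, not the authors' actual configuration or fusion rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation of an image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def celm_features(img, kernels):
    """Fixed random convolutional layer + sigmoid, flattened to one vector."""
    maps = [1.0 / (1.0 + np.exp(-conv2d_valid(img, k))) for k in kernels]
    return np.concatenate([m.ravel() for m in maps])

# Random, untrained kernels: in an ELM the hidden layer is never trained.
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]

# Toy data: twenty 8x8 "images" with scalar targets (purely illustrative).
X = [rng.standard_normal((8, 8)) for _ in range(20)]
H = np.stack([celm_features(x, kernels) for x in X])  # hidden-layer output matrix
T = rng.standard_normal((20, 1))                      # target matrix

# ELM "training" is a single least-squares solve via the Moore-Penrose inverse.
beta = np.linalg.pinv(H) @ T

# Inference: one forward pass through the fixed filters, then a linear map.
prediction = celm_features(X[0], kernels) @ beta
```

The design point the abstract highlights shows up in the last two steps: there is no backpropagation loop at all, only a pseudo-inverse, which is why the CELM sidesteps the CNN's high computational cost and hand-tuning.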
Pages: 15