MMFGAN: A novel multimodal brain medical image fusion based on the improvement of generative adversarial network

Times Cited: 20
Authors
Guo, Kai [1 ,2 ]
Hu, Xiaohan [3 ]
Li, Xiongfei [1 ,2 ]
Affiliations
[1] Jilin Univ, Key Lab Symbol Computat & Knowledge Engn, Minist Educ, Changchun 130012, Peoples R China
[2] Jilin Univ, Coll Comp Sci & Technol, Changchun 130012, Peoples R China
[3] First Hosp Jilin Univ, Dept Radiol, Changchun 130021, Peoples R China
Funding
National Natural Science Foundation of China; Industrial Technology Research and Development Funds Project;
关键词
Medical image fusion; Deep learning; Residual attention mechanism block; Concat detail texture block; Dual discriminator; NEURAL-NETWORK; TRANSFORM;
DOI
10.1007/s11042-021-11822-y
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In recent years, multimodal medical imaging for assisted diagnosis and treatment has developed rapidly. In brain disease diagnosis, CT-SPECT, MRI-PET and MRI-SPECT fusion images are favored by clinicians because they contain both soft-tissue structure information and organ metabolism information. Most previous medical image fusion algorithms are adaptations of methods designed for other types of image fusion, and such adaptations often lose the characteristic features of medical images. This paper proposes a multimodal medical image fusion model based on a generative adversarial network with a residual attention mechanism. In the generator, we construct a residual attention mechanism block and a concat detail texture block. The source images are concatenated into a single matrix, which is fed into both blocks in parallel to extract information such as size, shape, spatial location, and texture detail. The extracted features are passed to a merge block that reconstructs the image. The reconstructed image and the source images are then fed into two discriminators for correction, yielding the final fused image. The model was evaluated on images from three databases and achieved good fusion results. Qualitative and quantitative evaluations show that the model outperforms the comparison algorithms in fusion quality and in the retention of detail information.
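The generator/dual-discriminator layout described in the abstract can be summarized in a short sketch. The PyTorch code below is a minimal, hypothetical illustration of that structure; the class names (ResidualAttentionBlock, ConcatDetailTextureBlock), channel widths, and layer depths are assumptions chosen for readability, not the authors' released implementation.

```python
# Minimal sketch of the architecture described in the abstract.
# All block definitions, sizes, and names are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    """Extracts size/shape/spatial-location features through a residual attention path."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.attn = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())  # soft attention mask

    def forward(self, x):
        f = self.body(x)
        return x + f * self.attn(f)  # residual connection weighted by attention

class ConcatDetailTextureBlock(nn.Module):
    """Preserves fine texture detail by concatenating features from successive conv layers."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        f1 = torch.relu(self.conv1(x))
        f2 = torch.relu(self.conv2(torch.cat([x, f1], dim=1)))  # dense-style concatenation
        return f2

class Generator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.stem = nn.Conv2d(2, ch, 3, padding=1)  # the two source modalities enter channel-wise
        self.attn_branch = ResidualAttentionBlock(ch)
        self.texture_branch = ConcatDetailTextureBlock(ch)
        self.merge = nn.Sequential(                 # merge block reconstructs the fused image
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 1), nn.Tanh(),
        )

    def forward(self, src_a, src_b):
        x = self.stem(torch.cat([src_a, src_b], dim=1))  # concatenate sources into one matrix
        return self.merge(torch.cat([self.attn_branch(x), self.texture_branch(x)], dim=1))

class Discriminator(nn.Module):
    """One of two discriminators; each scores the fused image against one source modality."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2 * ch, 1),
        )

    def forward(self, img):
        return self.net(img)  # real/fake score

# Example forward pass on dummy single-channel patches (e.g., MRI and PET slices)
g = Generator()
d_mri, d_pet = Discriminator(), Discriminator()
mri, pet = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
fused = g(mri, pet)
scores = d_mri(fused), d_pet(fused)
```

In this sketch each discriminator compares the fused output against one source modality, which is the role the dual-discriminator design plays in the model; the paper's actual loss functions and training schedule are not reproduced here.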
Pages: 5889-5927
Number of Pages: 39