Medical image fusion method based on dense block and deep convolutional generative adversarial network

Cited by: 0
Authors
Cheng Zhao
Tianfu Wang
Baiying Lei
Affiliations
[1] Shenzhen University, School of Biomedical Engineering, Health Science Center
[2] National-Regional Key Technology Engineering Laboratory for Medical Ultrasound
[3] Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging
Source
Neural Computing and Applications | 2021, Vol. 33
Keywords
Medical image fusion; Deep convolutional GAN; Dense block; Encoder–decoder; Loss function
Abstract
Medical image fusion techniques can further improve the accuracy and time efficiency of clinical diagnosis by extracting comprehensive salient features and detail information from medical images of different modalities. We propose a novel medical image fusion algorithm based on a deep convolutional generative adversarial network and dense block models, which generates fused images with rich information. Specifically, the network architecture integrates two modules: an image generator module based on dense blocks and an encoder–decoder, and a discriminator module. We use the encoder network to extract image features, process these features with a fusion rule based on the L-max norm, and feed the result into the decoder network to obtain the final fused image. This approach overcomes the weakness of the manually designed activity-level measurements used in traditional methods, and the dense blocks allow intermediate-layer information to be processed without loss. In addition, we construct the loss function from a detail loss and a structural similarity loss, which improves the extraction of target information and edge detail from the images. Experiments on public clinical diagnostic medical image datasets show that the proposed algorithm not only preserves detail well but also suppresses artifacts, and its results surpass those of the comparison methods across different types of evaluation.
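Below is a minimal PyTorch sketch of the pipeline the abstract describes: a dense-block encoder, an element-wise maximum ("L-max") fusion rule applied to the encoder features of the two source modalities, and a decoder that reconstructs the fused image. The layer widths, kernel sizes, the gradient-based stand-in for the detail loss, and the helper names (DenseBlock, lmax_fuse, detail_loss) are illustrative assumptions rather than the authors' exact configuration; the discriminator and the structural similarity loss term are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps,
    so intermediate-layer information is carried through to the output."""

    def __init__(self, in_ch: int, growth: int = 16, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class Encoder(nn.Module):
    """Shallow convolutional stem followed by a dense block."""

    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.dense = DenseBlock(16)

    def forward(self, x):
        return self.dense(F.relu(self.stem(x)))


class Decoder(nn.Module):
    """Maps fused feature maps back to a single-channel fused image."""

    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, f):
        return self.net(f)


def lmax_fuse(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Element-wise maximum fusion of the two encoders' feature maps
    (one reading of the abstract's L-max norm fusion rule)."""
    return torch.max(feat_a, feat_b)


def detail_loss(fused: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
    """Laplacian-gradient difference, used here as an illustrative stand-in
    for the paper's detail loss term."""
    k = torch.tensor([[0.0, 1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0, 1.0, 0.0]], device=fused.device).view(1, 1, 3, 3)
    return F.l1_loss(F.conv2d(fused, k, padding=1),
                     F.conv2d(source, k, padding=1))


if __name__ == "__main__":
    enc = Encoder()
    dec = Decoder(enc.dense.out_channels)
    mri = torch.rand(1, 1, 128, 128)   # toy stand-ins for two modalities
    pet = torch.rand(1, 1, 128, 128)
    fused = dec(lmax_fuse(enc(mri), enc(pet)))
    loss = detail_loss(fused, mri) + detail_loss(fused, pet)
    print(fused.shape, float(loss))
```

In the full method, this generator would be trained adversarially against the discriminator, with the detail and structural similarity terms added to the generator loss; the sketch above only exercises the forward fusion path.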
Pages: 6595–6610 (15 pages)