CT and MRI Image Fusion via Coupled Feature-Learning GAN

Times Cited: 1
Authors
Mao, Qingyu [1 ]
Zhai, Wenzhe [2 ]
Lei, Xiang [3 ]
Wang, Zenghui [2 ]
Liang, Yongsheng [1 ,4 ]
Affiliations
[1] Shenzhen Univ, Coll Elect & Informat Engn, Shenzhen 518060, Peoples R China
[2] Shandong Univ Technol, Sch Elect & Elect Engn, Zibo 255000, Peoples R China
[3] Zhiyang Innovat Co Ltd, Jinan 250101, Peoples R China
[4] Shenzhen Technol Univ, Coll Big Data & Internet, Shenzhen 518118, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
image fusion; CT/MRI image; generative adversarial network; coupled network; performance;
DOI
10.3390/electronics13173491
Chinese Library Classification (CLC)
TP [automation and computer technology];
Discipline Code
0812 ;
Abstract
The fusion of multimodal medical images, particularly CT and MRI, is driven by the need to enhance the diagnostic process by providing clinicians with a single, comprehensive image that encapsulates all necessary details. Existing fusion methods often exhibit a bias toward features from one of the source images, making it difficult to preserve structural information and textural details simultaneously. Designing an effective fusion method that preserves more discriminative information is therefore crucial. In this work, we propose a Coupled Feature-Learning GAN (CFGAN) to fuse multimodal medical images into a single informative image. The proposed method establishes an adversarial game between a pair of coupled generators and two coupled discriminators. First, the coupled generators are trained to produce two real-like fused images, which are then used to deceive the two coupled discriminators. The two discriminators, in turn, are designed to minimize the structural distance so that the abundant information in the original source images is well maintained in the fused image. We further make the generators robust across scales by constructing a discriminative feature extraction (DFE) block with different dilation rates. Moreover, we introduce a cross-dimension interaction attention (CIA) block to refine the feature representations. Qualitative and quantitative experiments on common benchmarks demonstrate the competitive performance of CFGAN compared to other state-of-the-art methods.
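The abstract describes the DFE block only at a high level (parallel convolutions with different dilation rates to capture multiple scales). As a rough illustrative sketch of that general idea in NumPy — not the authors' implementation, and with a placeholder kernel — multi-dilation feature extraction can be mimicked as follows:

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation):
    """Valid-mode 2D correlation of a single-channel image with a
    dilated 3x3 kernel (no padding, stride 1)."""
    k = kernel.shape[0]
    eff = dilation * (k - 1) + 1          # effective receptive-field size
    H, W = image.shape
    out = np.zeros((H - eff + 1, W - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the patch at the dilated positions
            patch = image[i:i + eff:dilation, j:j + eff:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def dfe_block(image, dilations=(1, 2, 3)):
    """Toy multi-scale feature extractor: run one kernel at several
    dilation rates and stack the responses (cropped to a common size)."""
    kernel = np.ones((3, 3)) / 9.0        # placeholder smoothing kernel
    feats = [dilated_conv2d(image, kernel, d) for d in dilations]
    h = min(f.shape[0] for f in feats)
    w = min(f.shape[1] for f in feats)
    return np.stack([f[:h, :w] for f in feats])

img = np.arange(64, dtype=float).reshape(8, 8)
features = dfe_block(img)
print(features.shape)  # (3, 2, 2): one feature map per dilation rate
```

In a real network the kernels would be learned per branch and the branches fused (e.g. concatenated) before the attention stage; this sketch only demonstrates how larger dilation rates enlarge the receptive field without adding parameters.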
Pages: 18
Cited References
47 records
  • [1] Fusion of MRI and CT images using guided image filter and image statistics
    Bavirisetti, Durga Prasad
    Kollu, Vijayakumar
    Gang, Xiao
    Dhuli, Ravindra
    [J]. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2017, 27 (03) : 227 - 237
  • [2] Fusion of PET and MR Brain Images Based on IHS and Log-Gabor Transforms
    Chen, Cheng-I
    [J]. IEEE SENSORS JOURNAL, 2017, 17 (21) : 6995 - 7010
  • [3] An overview of multi-modal medical image fusion
    Du, Jiao
    Li, Weisheng
    Lu, Ke
    Xiao, Bin
    [J]. NEUROCOMPUTING, 2016, 215 : 3 - 20
  • [4] Image quality measures and their performance
    Eskicioglu, AM
    Fisher, PS
    [J]. IEEE TRANSACTIONS ON COMMUNICATIONS, 1995, 43 (12) : 2959 - 2965
  • [5] Image fusion based on generative adversarial network consistent with perception
    Fu, Yu
    Wu, Xiao-Jun
    Durrani, Tariq
    [J]. INFORMATION FUSION, 2021, 72 : 110 - 125
  • [6] Generative Adversarial Nets
    Goodfellow, Ian J.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, 2014, 27 : 2672
  • [7] Image fusion: Advances in the state of the art
    Goshtasby, A. Ardeshir
    Nikolov, Stavri
    [J]. INFORMATION FUSION, 2007, 8 (02) : 114 - 118
  • [8] A new image fusion performance metric based on visual information fidelity
    Han, Yu
    Cai, Yunze
    Cao, Yin
    Xu, Xiaoming
    [J]. INFORMATION FUSION, 2013, 14 (02) : 127 - 135
  • [9] A Review of Multimodal Medical Image Fusion Techniques
    Huang, Bing
    Yang, Feng
    Yin, Mengxiao
    Mo, Xiaoying
    Zhong, Cheng
    [J]. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE, 2020, 2020
  • [10] Algebraic Multi-Grid Based Multi-Focus Image Fusion Using Watershed Algorithm
    Huang, Ying
    Li, Weisheng
    Gao, Mingliang
    Liu, Zheng
    [J]. IEEE ACCESS, 2018, 6 : 47082 - 47091