STPGANsFusion: Structure and Texture Preserving Generative Adversarial Networks for Multi-modal Medical Image Fusion

Cited by: 1
Authors
Shah, Dhruvi [1 ]
Wani, Hareshwar [1 ]
Das, Manisha [1 ]
Gupta, Deep [1 ]
Radeva, Petia [2 ,3 ]
Bakde, Ashwini [4 ]
Affiliations
[1] Visvesvaraya Natl Inst Technol, Dept Elect & Commun Engn, Nagpur, Maharashtra, India
[2] Univ Barcelona, Dept Math & Comp Sci, Barcelona, Spain
[3] Comp Vis Ctr, Barcelona, Spain
[4] All India Inst Med Sci, Dept Radiodiag, Nagpur, Maharashtra, India
Keywords
Computed tomography; single-photon emission computed tomography; positron emission tomography; image fusion; generative adversarial networks; structure-texture decomposition; performance
DOI
10.1109/NCC55593.2022.9806733
CLC number
TN [Electronic technology, communication technology]
Discipline code
0809
Abstract
Medical images from different modalities carry diverse information. Fusing the features of these source images into a single image yields richer information content, which benefits subsequent medical applications. Recently, deep learning (DL) based networks have demonstrated the ability to produce promising fusion results by integrating the feature extraction and preservation tasks with little manual intervention. However, using a single network to extract features from multi-modal source images that characterize distinct information leads to the loss of crucial diagnostic information. To address this problem, we present a structure and texture preserving generative adversarial networks based medical image fusion method (STPGANsFusion). First, the textural and structural components of the source images are separated using structure gradient and texture decorrelating regularizer (SGTDR) based image decomposition, which preserves more complementary information and makes the model more robust. Next, the structure and texture components are fused using two generative adversarial networks (GANs), each consisting of one generator and two discriminators, to obtain the fused structure and texture components. The loss function of each GAN is framed according to the characteristics of the component being fused so as to minimize the loss of complementary information. The fused image is then reconstructed and undergoes adaptive mask-based structure enhancement to further boost its contrast and visualization. Extensive experiments are carried out on a wide variety of neurological images. Visual and quantitative results show a notable improvement in the fusion performance of the proposed method over state-of-the-art fusion methods.
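The pipeline sketched in the abstract (decompose each source image into structure and texture, fuse each component separately, then recombine) can be illustrated with a minimal NumPy sketch. This is an assumption-laden stand-in, not the paper's method: a box filter replaces the SGTDR decomposition, and simple averaging / max-absolute rules replace the two trained GANs; all function names here are hypothetical.

```python
import numpy as np

def smooth(img, k=5):
    # Box filter as a crude stand-in for the paper's SGTDR-based
    # structure extraction (the actual method is a learned/regularized
    # decomposition, not a linear filter).
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def decompose(img, k=5):
    # Structure = smooth base layer; texture = residual detail layer.
    structure = smooth(img, k)
    texture = img - structure
    return structure, texture

def fuse(img_a, img_b, k=5):
    # Component-wise fusion: averaging for coarse structure, and a
    # max-absolute rule for texture (keeps the stronger detail of the
    # two sources). In the paper, two GANs perform this step instead.
    sa, ta = decompose(img_a.astype(float), k)
    sb, tb = decompose(img_b.astype(float), k)
    fused_structure = 0.5 * (sa + sb)
    fused_texture = np.where(np.abs(ta) >= np.abs(tb), ta, tb)
    return fused_structure + fused_texture
```

Because the decomposition is exact (structure + texture reconstructs the input), fusing an image with itself returns the image unchanged, which is a useful sanity check for any such component-wise fusion rule.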
Pages: 172-177
Page count: 6
Related Papers (10 of 50 shown)
  • [1] Using Generative Adversarial Networks and Ensemble Learning for Multi-Modal Medical Image Fusion to Improve the Diagnosis of Rare Neurological Disorders
    Reddy, Bhargavi Peddi
    Rangaswamy, K.
    Bharadwaja, Doradla
    Dupaty, Mani Mohan
    Sarkar, Partha
    Al Ansari, Mohammed Saleh
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2023, 14 (11) : 1063 - 1072
  • [2] Fusing EO and LiDAR for SAR Image Translation with Multi-Modal Generative Adversarial Networks
    Zhu, Jiang
    Qing, Yuanyuan
    Lin, Zhiping
    Wen, Kilian
    2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,
  • [3] SHAPE-CONSISTENT GENERATIVE ADVERSARIAL NETWORKS FOR MULTI-MODAL MEDICAL SEGMENTATION MAPS
    Segre, Leo
    Hirschorn, Or
    Ginzburg, Dvir
    Raviv, Dan
    2022 IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (IEEE ISBI 2022), 2022,
  • [4] Multi-modal medical image fusion based on wavelet transform and texture measure
    Kang Yuanyuan
    Li Bin
    Tian Lianfang
    Mao Zongyuan
    PROCEEDINGS OF THE 26TH CHINESE CONTROL CONFERENCE, VOL 6, 2007, : 697 - +
  • [5] An overview of multi-modal medical image fusion
    Du, Jiao
    Li, Weisheng
    Lu, Ke
    Xiao, Bin
    NEUROCOMPUTING, 2016, 215 : 3 - 20
  • [6] Multi-Source Medical Image Fusion Based on Wasserstein Generative Adversarial Networks
    Yang, Zhiguang
    Chen, Youping
    Le, Zhuliang
    Fan, Fan
    Pan, Erting
    IEEE ACCESS, 2019, 7 : 175947 - 175958
  • [7] A novel multi-modal medical image fusion algorithm
    Xinhua Li
    Jing Zhao
    Journal of Ambient Intelligence and Humanized Computing, 2021, 12 : 1995 - 2002
  • [8] A novel multi-modal medical image fusion algorithm
    Li, Xinhua
    Zhao, Jing
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2021, 12 (02) : 1995 - 2002
  • [9] Multi-modal generative adversarial networks for traffic event detection in smart cities
    Chen, Qi
    Wang, Wei
    Huang, Kaizhu
    De, Suparna
    Coenen, Frans
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 177
  • [10] DiamondGAN: Unified Multi-modal Generative Adversarial Networks for MRI Sequences Synthesis
    Li, Hongwei
    Paetzold, Johannes C.
    Sekuboyina, Anjany
    Kofler, Florian
    Zhang, Jianguo
    Kirschke, Jan S.
    Wiestler, Benedikt
    Menze, Bjoern
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT IV, 2019, 11767 : 795 - 803