Generative Adversarial Network for Trimodal Medical Image Fusion Using Primitive Relationship Reasoning

Times Cited: 3
Authors
Huang, Jingxue [1 ]
Li, Xiaosong [1 ]
Tan, Haishu [1 ]
Cheng, Xiaoqi [2 ]
Affiliations
[1] Foshan Univ, Sch Phys & Optoelect Engn, Foshan 528225, Peoples R China
[2] Foshan Univ, Guangdong Prov Key Lab Ind Intelligent Inspect Tec, Foshan 528000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Medical diagnostic imaging; Feature extraction; Cognition; Image fusion; Computational modeling; Task analysis; Magnetic resonance imaging; Generative adversarial network; trimodal medical image fusion; primitive relationship reasoning; QUALITY ASSESSMENT; FRAMEWORK;
DOI
10.1109/JBHI.2024.3426664
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Medical image fusion has become an active biomedical image processing technology in recent years. It coalesces complementary information from medical images of different modalities into a single informative fused image to support reasonable and effective medical assistance. Current research has focused mainly on dual-modal medical image fusion, while little attention has been paid to trimodal medical image fusion, which has greater application demand and clinical significance. To address this, the study proposes an end-to-end generative adversarial network for trimodal medical image fusion. Using a multi-scale squeeze-and-excitation reasoning attention network, the proposed method generates an energy map for each source image, enabling efficient trimodal fusion under the guidance of an energy-ratio fusion strategy. To capture global semantic information, we introduce squeeze-and-excitation reasoning attention blocks and enhance the global features through primitive relationship reasoning. Extensive fusion experiments demonstrate that our method yields superior visual results and objective evaluation metric scores compared with state-of-the-art fusion methods. Furthermore, the proposed method also achieves the best accuracy in a glioma segmentation experiment.
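The energy-ratio fusion strategy mentioned in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `energy_ratio_fuse` and the interpretation of the strategy as per-pixel normalization of the three energy maps into fusion weights are assumptions inferred from the abstract's description.

```python
import numpy as np

def energy_ratio_fuse(images, energy_maps, eps=1e-8):
    """Fuse co-registered source images by energy-ratio weighting (assumed scheme).

    Each pixel's weight for modality i is E_i / (E_1 + ... + E_n), so the
    weights sum to one and higher-energy regions dominate the fused result.
    `eps` guards against division by zero where all energies vanish.
    """
    energies = np.stack(energy_maps, axis=0).astype(np.float64)
    weights = energies / (energies.sum(axis=0, keepdims=True) + eps)
    stacked = np.stack(images, axis=0).astype(np.float64)
    return (weights * stacked).sum(axis=0)

# Toy usage with three 2x2 "modalities" and uniform energy maps:
imgs = [np.full((2, 2), v) for v in (1.0, 2.0, 3.0)]
maps = [np.ones((2, 2)) for _ in range(3)]
fused = energy_ratio_fuse(imgs, maps)  # equal weights -> per-pixel mean
```

With uniform energy maps the scheme reduces to a per-pixel average; in the paper, the energy maps are instead produced by the attention network, so modality-specific salient structures receive larger weights.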
Pages: 5729-5741
Page count: 13