GeSeNet: A General Semantic-Guided Network With Couple Mask Ensemble for Medical Image Fusion

Cited by: 27
Authors
Li, Jiawei [1]
Liu, Jinyuan [2]
Zhou, Shihua [1]
Zhang, Qiang [3]
Kasabov, Nikola K. [4,5,6]
Affiliations
[1] Dalian Univ, Key Lab Adv Design & Intelligent Comp, Minist Educ, Sch Software Engn, Dalian 116622, Peoples R China
[2] Dalian Univ Technol, Sch Mech Engn, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116024, Peoples R China
[4] Auckland Univ Technol, Knowledge Engn & Discovery Res Inst, Auckland 1061, New Zealand
[5] Univ Ulster, Intelligent Syst Res Ctr, Coleraine BT48 7JL, Londonderry, North Ireland
[6] Bulgarian Acad Sci, IICT, Sofia 1000, Bulgaria
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Semantics; Medical diagnostic imaging; Feature extraction; Image edge detection; Magnetic resonance imaging; Discrete wavelet transforms; multimodal medical image; region mask; semantic information;
DOI
10.1109/TNNLS.2023.3293274
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
At present, multimodal medical image fusion has become an essential means for researchers and doctors to predict diseases and study pathology. Nevertheless, how to preserve more unique features from different modal source images while ensuring time efficiency is a tricky problem. To handle this issue, we propose a flexible semantic-guided architecture with a mask-optimized framework in an end-to-end manner, termed GeSeNet. Specifically, a region mask module is devised to deepen the learning of important information while pruning redundant computation to reduce runtime. An edge enhancement module and a global refinement module are presented to modify the extracted features, boosting edge textures and adjusting the overall visual appearance. In addition, we introduce a semantic module, cascaded with the proposed fusion network, to deliver semantic information into the generated results. Extensive qualitative and quantitative comparisons (on MRI-CT, MRI-PET, and MRI-SPECT fusion tasks) between our method and ten state-of-the-art methods show that our generated images lead the way. Moreover, we conduct efficiency comparisons and ablation experiments to demonstrate that the proposed method performs excellently in the field of multimodal medical image fusion. The code is available at https://github.com/lok-18/GeSeNet.
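The abstract describes a four-stage pipeline: a region mask selects important regions, an edge enhancement module boosts edge textures, a global refinement module adjusts the overall appearance, and the masked features of the two modalities are fused. The sketch below is a minimal, illustrative reconstruction from the abstract alone, not the authors' implementation (the real code is learned convolutional networks; see the linked GitHub repository). All function names (`region_mask`, `edge_enhance`, `global_refine`, `fuse`) and the thresholding, gradient, and rescaling operations are assumptions chosen for clarity.

```python
# Illustrative sketch of the fusion pipeline described in the abstract.
# Each function is a hand-crafted stand-in for a learned GeSeNet module.

def region_mask(img, thresh=0.5):
    """Binary mask marking 'important' pixels; stand-in for the learned
    region mask module that prunes redundant computation."""
    return [[1 if px >= thresh else 0 for px in row] for row in img]

def edge_enhance(img, weight=0.5):
    """Amplify horizontal intensity differences; stand-in for the edge
    enhancement module that boosts edge textures."""
    out = []
    for row in img:
        new_row = [row[0]]
        for j in range(1, len(row)):
            grad = row[j] - row[j - 1]
            new_row.append(min(1.0, max(0.0, row[j] + weight * grad)))
        out.append(new_row)
    return out

def global_refine(img):
    """Stretch intensities to the full [0, 1] range; stand-in for the
    global refinement module adjusting overall visual appearance."""
    flat = [px for row in img for px in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(px - lo) / span for px in row] for row in img]

def fuse(mri, other):
    """Mask-weighted fusion of two registered modalities: keep MRI pixels
    inside the important-region mask, the other modality elsewhere."""
    mask = region_mask(mri)
    a, b = edge_enhance(mri), edge_enhance(other)
    fused = [[a[i][j] if mask[i][j] else b[i][j]
              for j in range(len(a[0]))] for i in range(len(a))]
    return global_refine(fused)

# Toy 2x2 "images" with intensities in [0, 1]:
mri = [[0.9, 0.8], [0.2, 0.1]]
pet = [[0.3, 0.4], [0.7, 0.6]]
print(fuse(mri, pet))
```

In the actual network the mask, enhancement, and refinement are learned end-to-end and a cascaded semantic segmentation module supervises the fusion result; the stand-ins above only mirror the data flow.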
Pages: 16248-16261
Page count: 14
References
57 references in total; entries [21]-[30] shown below.
[21] Liu, Jinyuan; Shang, Jingjie; Liu, Risheng; Fan, Xin. Attention-Guided Global-Local Adversarial Learning for Detail-Preserving Multi-Exposure Image Fusion. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(8): 5026-5040.
[22] Liu, Jinyuan; Fan, Xin; Jiang, Ji; Liu, Risheng; Luo, Zhongxuan. Learning a Deep Multi-Scale Feature Ensemble and an Edge-Attention Guidance for Image Fusion. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(1): 105-119.
[23] Liu, Pengfei; Liu, Jiahui; Xiao, Liang. A Unified Pansharpening Method With Structure Tensor Driven Spatial Consistency and Deep Plug-and-Play Priors. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60.
[24] Liu, Risheng; Liu, Jinyuan; Jiang, Zhiying; Fan, Xin; Luo, Zhongxuan. A Bilevel Integrated Model With Data-Driven Layer Ensemble for Multi-Modality Image Fusion. IEEE Transactions on Image Processing, 2021, 30: 1261-1274.
[25] Liu, Yu; Chen, Xun; Ward, Rabab K.; Wang, Z. Jane. Medical Image Fusion via Convolutional Sparsity Based Morphological Component Analysis. IEEE Signal Processing Letters, 2019, 26(3): 485-489.
[26] Liu, Y. 2017 20th International Conference on Information Fusion (FUSION), 2017: 1070.
[27] Liu, Yu; Liu, Shuping; Wang, Zengfu. A general framework for image fusion based on multi-scale transform and sparse representation. Information Fusion, 2015, 24: 147-164.
[28] Ma, Jiayi; Xu, Han; Jiang, Junjun; Mei, Xiaoguang; Zhang, Xiao-Ping. DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion. IEEE Transactions on Image Processing, 2020, 29: 4980-4995.
[29] Peng, Chengli; Tian, Tian; Chen, Chen; Guo, Xiaojie; Ma, Jiayi. Bilateral attention decoder: A lightweight decoder for real-time semantic segmentation. Neural Networks, 2021, 137: 188-199.
[30] Prakash, Om; Khare, Ashish. CT and MR Images Fusion Based on Stationary Wavelet Transform by Modulus Maxima. Computational Vision and Robotics, 2015, 332: 199-204.