Exploiting Partial Common Information Microstructure for Multi-modal Brain Tumor Segmentation

Cited by: 5
Authors
Mei, Yongsheng [1]
Venkataramani, Guru [1]
Lan, Tian [1]
Affiliations
[1] George Washington Univ, Washington, DC 20052, USA
Source
MACHINE LEARNING FOR MULTIMODAL HEALTHCARE DATA, ML4MHD 2023 | 2024 / Vol. 14315
Keywords
Multi-modal learning; Image segmentation; Maximal correlation optimization; Common information
DOI
10.1007/978-3-031-47679-2_6
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Learning with multiple modalities is crucial for automated brain tumor segmentation from magnetic resonance imaging data. Explicitly optimizing the common information shared among all modalities (e.g., by maximizing the total correlation) has been shown to achieve better feature representations and thus enhance the segmentation performance. However, existing approaches are oblivious to partial common information shared by subsets of the modalities. In this paper, we show that identifying such partial common information can significantly boost the discriminative power of image segmentation models. In particular, we introduce a novel concept of partial common information mask (PCI-mask) to provide a fine-grained characterization of what partial common information is shared by which subsets of the modalities. By solving a masked correlation maximization and simultaneously learning an optimal PCI-mask, we identify the latent microstructure of partial common information and leverage it in a self-attention module to selectively weight different feature representations in multi-modal data. We implement our proposed framework on the standard U-Net. Our experimental results on the Multi-modal Brain Tumor Segmentation Challenge (BraTS) datasets outperform those of state-of-the-art segmentation baselines, with validation Dice similarity coefficients of 0.920, 0.897, and 0.837 for the whole tumor, tumor core, and enhancing tumor, respectively, on BraTS-2020.
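To make the abstract's idea concrete, below is a minimal, hypothetical PyTorch sketch and not the authors' implementation: the names PCIMaskFusion and masked_correlation_loss are illustrative only. The paper learns a PCI-mask over subsets of modalities and solves a masked correlation maximization inside a self-attention module on a U-Net backbone; this sketch only approximates that with a soft per-modality, per-channel mask and a cosine-similarity surrogate for the correlation objective. For BraTS, the number of modalities is four (T1, T1ce, T2, FLAIR).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PCIMaskFusion(nn.Module):
    """Hypothetical sketch: a learnable soft mask over (modality, channel)
    pairs that weights per-modality feature maps before fusing them."""

    def __init__(self, num_modalities: int, channels: int):
        super().__init__()
        # One logit per (modality, channel): which channels of each modality
        # are treated as carrying (partial) common information.
        self.mask_logits = nn.Parameter(torch.zeros(num_modalities, channels))

    def forward(self, feats):
        # feats: list of per-modality feature maps, each of shape (B, C, H, W)
        x = torch.stack(feats, dim=1)             # (B, M, C, H, W)
        mask = torch.sigmoid(self.mask_logits)    # (M, C), soft PCI-mask
        weighted = x * mask[None, :, :, None, None]
        # Attention-style weighting across modalities, per channel.
        attn = torch.softmax(mask, dim=0)         # (M, C)
        fused = (x * attn[None, :, :, None, None]).sum(dim=1)  # (B, C, H, W)
        return fused, weighted, mask


def masked_correlation_loss(weighted: torch.Tensor, eps: float = 1e-6):
    """Surrogate for masked correlation maximization: encourage the masked
    features of different modalities to agree (high pairwise cosine
    similarity). Assumes at least two modalities."""
    b, m, c, h, w = weighted.shape
    v = F.normalize(weighted.flatten(2), dim=-1, eps=eps)   # (B, M, D)
    sim = torch.einsum('bmd,bnd->bmn', v, v)                 # (B, M, M)
    off_diag = sim.sum(dim=(1, 2)) - sim.diagonal(dim1=1, dim2=2).sum(dim=1)
    # Negative sign: minimizing this loss maximizes the masked correlation.
    return -off_diag.mean() / (m * (m - 1))
```

In this hypothetical usage, the fused map would feed the U-Net decoder while masked_correlation_loss is added to the segmentation loss, so the mask is learned jointly with the network rather than fixed a priori.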
Pages: 64-85
Page count: 22