Multi-modal brain tumor segmentation via disentangled representation learning and region-aware contrastive learning

Cited: 10
Author
Zhou, Tongxue [1 ]
Affiliation
[1] Hangzhou Normal Univ, Sch Informat Sci & Technol, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain tumor segmentation; Multi-modal feature fusion; Disentangled representation learning; Contrastive learning; SURVIVAL; NET;
DOI
10.1016/j.patcog.2024.110282
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Brain tumors threaten the lives and health of people worldwide. Automatic brain tumor segmentation from multiple MR images is a challenging task in medical image analysis, and accurate segmentation relies on effective feature learning. Existing methods address multi-modal MR brain tumor segmentation by explicitly learning a shared feature representation. However, these methods fail to capture the relationship between MR modalities and the feature correlation between different target tumor regions. In this paper, I propose a multi-modal brain tumor segmentation network via disentangled representation learning and region-aware contrastive learning. Specifically, a feature fusion module is first designed to learn a valuable multi-modal feature representation. Subsequently, a novel disentangled representation learning scheme is proposed to decouple the fused feature representation into multiple factors corresponding to the target tumor regions. Furthermore, contrastive learning is introduced to help the network extract tumor-region-related feature representations. Finally, the segmentation results are obtained using the segmentation decoders. Quantitative and qualitative experiments on the public BraTS 2018 and BraTS 2019 datasets justify the importance of the proposed strategies, and the proposed approach achieves better performance than other state-of-the-art approaches. In addition, the proposed strategies can be extended to other deep neural networks.
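This record does not include the paper's actual loss formulation. As an illustration only, the region-aware contrastive learning described in the abstract can be sketched as a supervised InfoNCE-style objective over region-level embeddings, where features belonging to the same tumor region are treated as positives and all others as negatives. The function name, the temperature value, and the NumPy formulation below are my assumptions, not the author's implementation.

```python
import numpy as np

def region_contrastive_loss(features, region_labels, temperature=0.1):
    """Supervised InfoNCE-style loss over region feature embeddings.

    features: (N, D) array of region-level feature vectors.
    region_labels: (N,) array; embeddings sharing a label (i.e. the same
    tumor region) are pulled together, all other pairs are pushed apart.
    """
    f = np.asarray(features, dtype=float)
    # L2-normalize so dot products are cosine similarities
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    sim = (f @ f.T) / temperature            # (N, N) similarity logits
    np.fill_diagonal(sim, -np.inf)           # exclude self-pairs
    sim = sim - sim.max(axis=1, keepdims=True)  # stabilize the softmax
    exp_sim = np.exp(sim)                    # exp(-inf) = 0 on the diagonal

    labels = np.asarray(region_labels)
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)

    # Per-anchor loss: -log( sum over positives / sum over all pairs ),
    # averaged over anchors that have at least one positive.
    num = (exp_sim * pos).sum(axis=1)
    denom = exp_sim.sum(axis=1)
    valid = pos.any(axis=1)
    return float(np.mean(-np.log(num[valid] / denom[valid])))
```

With matching region labels assigned to nearby embeddings the loss is small; shuffling the labels so that positives are dissimilar pairs increases it, which is the behavior a region-aware contrastive term relies on.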
Pages: 11
Related papers
50 records
  • [41] EMP: Emotion-guided Multi-modal Fusion and Contrastive Learning for Personality Traits Recognition
    Wang, Yusong
    Li, Dongyuan
    Funakoshi, Kotaro
    Okumura, Manabu
    PROCEEDINGS OF THE 2023 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2023, 2023, : 243 - 252
  • [42] Self-supervised region-aware segmentation of COVID-19 CT images using 3D GAN and contrastive learning
    Shabani, Siyavash
    Homayounfar, Morteza
    Vardhanabhuti, Varut
    Mahani, Mohammad-Ali Nikouei
    Koohi-Moghadam, Mohamad
    COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 149
  • [43] Multi-modal disease segmentation with continual learning and adaptive decision fusion
    Xu, Xu
    Chen, Junxin
    Thakur, Dipanwita
    Hong, Duo
    INFORMATION FUSION, 2025, 118
  • [44] Overcoming the challenges of multi-modal medical image sharing: A novel data distillation strategy via contrastive learning
    Du, Taoli
    Li, Wenhui
    Wang, Zeyu
    Yang, Feiyang
    Teng, Peihong
    Yi, Xingcheng
    Chen, Hongyu
    Wang, Zixuan
    Zhang, Ping
    Zhang, Tianyang
    NEUROCOMPUTING, 2025, 617
  • [45] Multi-modal Robustness Fake News Detection with Cross-Modal and Propagation Network Contrastive Learning
    Chen, Han
    Wang, Hairong
    Liu, Zhipeng
    Li, Yuhua
    Hu, Yifan
    Zhang, Yujing
    Shu, Kai
    Li, Ruixuan
    Yu, Philip S.
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [46] Automated brain tumor segmentation on multi-modal MR image using SegNet
    Alqazzaz, Salma
    Sun, Xianfang
    Yang, Xin
    Nokes, Len
    COMPUTATIONAL VISUAL MEDIA, 2019, 5 (02) : 209 - 219
  • [49] A Multi-modal Framework with Contrastive Learning and Sequential Encoding for Enhanced Sleep Stage Detection
    Wang, Zehui
    Zhang, Zhihan
    Wang, Hongtao
    PATTERN RECOGNITION AND COMPUTER VISION, PT V, PRCV 2024, 2025, 15035 : 3 - 17
  • [50] Nodule-CLIP: Lung nodule classification based on multi-modal contrastive learning
    Sun, L.
    Zhang, M.
    Lu, Y.
    Zhu, W.
    Yi, Y.
    Yan, F.
    COMPUTERS IN BIOLOGY AND MEDICINE, 175