Multi-modal brain tumor segmentation via disentangled representation learning and region-aware contrastive learning

Cited by: 16
Author
Zhou, Tongxue [1]
Affiliation
[1] Hangzhou Normal Univ, Sch Informat Sci & Technol, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain tumor segmentation; Multi-modal feature fusion; Disentangled representation learning; Contrastive learning; SURVIVAL; NET;
DOI
10.1016/j.patcog.2024.110282
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Brain tumors pose a serious threat to human life and health worldwide. Automatic brain tumor segmentation from multiple MR images is a challenging task in medical image analysis, and accurate segmentation is known to rely on effective feature learning. Existing methods address multi-modal MR brain tumor segmentation by explicitly learning a shared feature representation; however, they fail to capture the relationships between MR modalities and the feature correlations between different target tumor regions. In this paper, I propose a multi-modal brain tumor segmentation network based on disentangled representation learning and region-aware contrastive learning. Specifically, a feature fusion module is first designed to learn a valuable multi-modal feature representation. Subsequently, a novel disentangled representation learning scheme is proposed to decouple the fused feature representation into multiple factors corresponding to the target tumor regions. Furthermore, region-aware contrastive learning is introduced to help the network extract tumor-region-related feature representations. Finally, the segmentation results are obtained from the segmentation decoders. Quantitative and qualitative experiments on the public BraTS 2018 and BraTS 2019 datasets confirm the effectiveness of the proposed strategies, and the proposed approach achieves better performance than other state-of-the-art approaches. In addition, the proposed strategies can be extended to other deep neural networks.
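The fuse-then-disentangle pipeline outlined in the abstract can be sketched as below. This is a minimal illustrative PyTorch sketch under stated assumptions, not the paper's implementation: the 1x1x1 fusion convolution, the per-region projection heads, the lightweight per-region decoders, and the pooled-cosine contrastive term are all hypothetical simplifications chosen only to make the structure concrete.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionDisentangleSeg(nn.Module):
    """Hypothetical sketch: fuse multi-modal features, disentangle the
    fused representation into one factor per target tumor region, and
    decode each factor into a binary region mask."""

    def __init__(self, n_modalities=4, channels=16, n_regions=3):
        super().__init__()
        # Feature fusion: channel-concatenated modality features are
        # mixed by a 1x1x1 convolution (an assumed, simple fusion module).
        self.fuse = nn.Conv3d(n_modalities * channels, channels, kernel_size=1)
        # Disentanglement: one projection head per target tumor region
        # (e.g. whole tumor, tumor core, enhancing tumor in BraTS).
        self.factors = nn.ModuleList(
            nn.Conv3d(channels, channels, kernel_size=1) for _ in range(n_regions)
        )
        # One lightweight segmentation decoder per region factor.
        self.decoders = nn.ModuleList(
            nn.Conv3d(channels, 1, kernel_size=1) for _ in range(n_regions)
        )

    def forward(self, feats):  # feats: (B, n_modalities*channels, D, H, W)
        fused = F.relu(self.fuse(feats))
        factors = [head(fused) for head in self.factors]
        masks = [torch.sigmoid(dec(z)) for dec, z in zip(self.decoders, factors)]
        return factors, masks


def region_contrastive_loss(factors):
    """Toy region-aware contrastive term: global-average-pool each region
    factor to an embedding and penalize cosine similarity between factors
    of *different* regions, pushing region representations apart."""
    embs = [F.normalize(z.mean(dim=(2, 3, 4)), dim=1) for z in factors]  # each (B, C)
    loss, n_pairs = 0.0, 0
    for i in range(len(embs)):
        for j in range(i + 1, len(embs)):
            loss = loss + (embs[i] * embs[j]).sum(dim=1).mean()
            n_pairs += 1
    return loss / n_pairs
```

In this sketch the contrastive term would be added to the usual per-region segmentation losses (e.g. Dice or cross-entropy on each decoded mask) during training.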
Pages: 11