Masked Deformation Modeling for Volumetric Brain MRI Self-Supervised Pre-Training

Cited by: 0
Authors
Lyu, Junyan [1,2]
Bartlett, Perry F. [2]
Nasrallah, Fatima A. [2]
Tang, Xiaoying [1,3]
Affiliations
[1] Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen 518055, Peoples R China
[2] Univ Queensland, Queensland Brain Inst, St Lucia, Qld 4072, Australia
[3] Southern Univ Sci & Technol, Jiaxing Res Inst, Jiaxing 314031, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain; Magnetic resonance imaging; Deformation; Brain modeling; Image segmentation; Image restoration; Biomedical imaging; Annotations; Feature extraction; Lesions; Self-supervised learning; masked deformation modeling; brain segmentation; DIFFEOMORPHIC IMAGE REGISTRATION; SEGMENTATION; HIPPOCAMPUS; MORPHOMETRY; PATTERNS; RESOURCE; ATLAS;
DOI
10.1109/TMI.2024.3510922
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Subject Classification Code
081203; 0835;
Abstract
Self-supervised learning (SSL) has been proposed to alleviate neural networks' reliance on annotated data and to improve downstream performance, and it has achieved substantial success in several volumetric medical image segmentation tasks. However, most existing approaches are designed for and pre-trained on CT or MRI datasets of non-brain organs. The lack of brain-specific priors limits these methods' performance on brain segmentation, especially on fine-grained brain parcellation. To overcome this limitation, we propose a novel SSL strategy for MRI of the human brain, named Masked Deformation Modeling (MDM). MDM first conducts atlas-guided patch sampling on individual brain MRI scans (moving volumes) and an MNI152 template (the fixed volume). The sampled moving volumes are randomly masked in a feature-aligned manner and then fed into a U-Net-based network to extract latent features. An intensity head and a deformation field head decode the latent features, restoring the masked volume and predicting the deformation field from the moving volume to the fixed volume, respectively. The proposed MDM is fine-tuned and evaluated on three brain parcellation datasets of different granularities (JHU, Mindboggle-101, CANDI), a brain lesion segmentation dataset (ATLAS2), and a brain tumor segmentation dataset (BraTS21). Results demonstrate that MDM outperforms various state-of-the-art medical SSL methods by considerable margins and can effectively reduce annotation effort by at least 40%. Code and pre-trained weights will be released at https://github.com/CRazorback/MDM.
Pages: 1596-1607
Number of pages: 12
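
The abstract describes the mechanics of MDM: atlas-guided patch sampling, masking of the moving volume, a shared U-Net backbone, and two decoding heads (intensity restoration and deformation-field prediction). The PyTorch sketch below is a minimal illustration of that dual-objective pre-training step, not the authors' implementation: the tiny backbone, the simple block masking (standing in for the paper's feature-aligned masking), the equal loss weighting, and the omission of a deformation smoothness regularizer are all assumptions made for brevity; the released code at https://github.com/CRazorback/MDM is authoritative.

# Hypothetical sketch of the MDM pre-training objective; all sizes and
# design choices are placeholders, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    """Two 3D convolutions with instance norm and LeakyReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """Deliberately small 3D U-Net standing in for the paper's backbone."""
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        self.pool = nn.MaxPool3d(2)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        up = F.interpolate(e2, scale_factor=2, mode="trilinear", align_corners=False)
        return self.dec1(torch.cat([up, e1], dim=1))

class MDMSketch(nn.Module):
    """Shared backbone with an intensity head and a deformation-field head."""
    def __init__(self, base=16):
        super().__init__()
        # Masked moving patch and fixed atlas patch are concatenated on channels.
        self.backbone = TinyUNet3D(in_ch=2, base=base)
        self.intensity_head = nn.Conv3d(base, 1, 1)  # restores masked intensities
        self.flow_head = nn.Conv3d(base, 3, 1)       # 3D displacement field

    def forward(self, moving_masked, fixed):
        feats = self.backbone(torch.cat([moving_masked, fixed], dim=1))
        return self.intensity_head(feats), self.flow_head(feats)

def random_block_mask(x, ratio=0.6, block=8):
    """Zero out random cubic blocks; a stand-in for feature-aligned masking."""
    B, _, D, H, W = x.shape
    mask = (torch.rand(B, 1, D // block, H // block, W // block,
                       device=x.device) > ratio).float()
    mask = F.interpolate(mask, size=(D, H, W), mode="nearest")  # 1 = kept voxel
    return x * mask, mask

def warp(moving, flow):
    """Warp `moving` with a displacement field (normalized coords) via grid_sample."""
    B, _, D, H, W = moving.shape
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, D, device=moving.device),
        torch.linspace(-1, 1, H, device=moving.device),
        torch.linspace(-1, 1, W, device=moving.device), indexing="ij")
    grid = torch.stack([xx, yy, zz], dim=-1).unsqueeze(0)  # grid_sample wants (x,y,z)
    offset = flow.permute(0, 2, 3, 4, 1).flip(-1)          # flow channels assumed (z,y,x)
    return F.grid_sample(moving, grid + offset, align_corners=True)

# One hypothetical pre-training step on a (moving patch, atlas patch) pair.
model = MDMSketch()
moving = torch.randn(2, 1, 32, 32, 32)  # patches sampled from a subject scan
fixed = torch.randn(2, 1, 32, 32, 32)   # corresponding MNI152 atlas patches
moving_masked, mask = random_block_mask(moving)
restored, flow = model(moving_masked, fixed)
loss_restore = F.mse_loss(restored * (1 - mask), moving * (1 - mask))  # masked voxels only
loss_register = F.mse_loss(warp(moving, flow), fixed)  # a real setup would also
loss = loss_restore + loss_register                    # penalize flow gradients
loss.backward()

Pre-training would repeat this step over sampled patch pairs; for downstream segmentation, the two heads are presumably discarded and the backbone fine-tuned with a task-specific head, though the abstract does not spell out the transfer recipe.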