BrainMass: Advancing Brain Network Analysis for Diagnosis With Large-Scale Self-Supervised Learning

Cited by: 2
Authors
Yang, Yanwu [1 ,2 ]
Ye, Chenfei [3 ]
Su, Guinan [4 ]
Zhang, Ziyao [2 ,5 ]
Chang, Zhikai [3 ]
Chen, Hairui [1 ,2 ]
Chan, Piu [6 ]
Yu, Yue [2 ]
Ma, Ting [1 ,2 ,7 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Elect & Informat Engn, Shenzhen 518000, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518066, Guangdong, Peoples R China
[3] Harbin Inst Technol Shenzhen, Shenzhen 518057, Peoples R China
[4] Tencent Data Platform, Shenzhen 518057, Peoples R China
[5] Chinese Acad Sci, Shenzhen Inst Adv Technol, Paul C Lauterbur Res Ctr Biomed Imaging, Shenzhen 518000, Guangdong, Peoples R China
[6] Capital Med Univ, Xuanwu Hosp, Beijing 100053, Peoples R China
[7] Harbin Inst Technol Shenzhen, Int Res Inst Artificial Intelligence, Shenzhen 518000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Brain modeling; Task analysis; Adaptation models; Self-supervised learning; Biological system modeling; Data models; Transformers; brain network; transformer; large-scale; pretrain; CONNECTIVITY; CONNECTOMICS;
DOI
10.1109/TMI.2024.3414476
CLC classification
TP39 [Computer Applications]
Discipline codes
081203; 0835
Abstract
Foundation models pretrained on large-scale datasets via self-supervised learning demonstrate exceptional versatility across various tasks. Because medical data are heterogeneous and hard to collect, this approach is especially beneficial for medical image analysis and neuroscience research, as it streamlines broad downstream tasks without the need for numerous costly annotations. However, there has been limited investigation into brain network foundation models, limiting their adaptability and generalizability for broad neuroscience studies. In this study, we aim to bridge this gap. In particular, 1) we curated a comprehensive dataset by collating images from 30 datasets, comprising 70,781 samples from 46,686 participants. Moreover, we introduce pseudo-functional connectivity (pFC), which generates millions of augmented brain networks by randomly dropping certain timepoints of the BOLD signal; 2) we propose the BrainMass framework for brain network self-supervised learning via mask modeling and feature alignment. BrainMass employs Mask-ROI Modeling (MRM) to bolster intra-network dependencies and regional specificity. Furthermore, a Latent Representation Alignment (LRA) module regularizes augmented brain networks of the same participant, which share similar topological properties, to yield similar latent representations by aligning their latent embeddings. Extensive experiments on eight internal tasks and seven external brain disorder diagnosis tasks show BrainMass's superior performance, highlighting its significant generalizability and adaptability. Moreover, BrainMass demonstrates powerful few/zero-shot learning abilities and exhibits meaningful interpretations for various diseases, showcasing its potential for clinical applications.
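The pFC augmentation described above can be illustrated with a minimal sketch: drop a random subset of BOLD timepoints and recompute the region-by-region connectivity matrix. The function name, drop ratio, and use of Pearson correlation here are assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

def pseudo_functional_connectivity(bold, drop_ratio=0.2, rng=None):
    """Augment a brain network by randomly dropping BOLD timepoints
    and recomputing the ROI-to-ROI correlation matrix.

    bold: (T, R) array of T timepoints for R regions of interest.
    Returns an (R, R) pseudo-functional-connectivity matrix.
    """
    rng = np.random.default_rng(rng)
    T = bold.shape[0]
    # Keep a random subset of timepoints (without replacement).
    keep = rng.choice(T, size=int(T * (1 - drop_ratio)), replace=False)
    keep.sort()
    # Pearson correlation across the retained timepoints.
    return np.corrcoef(bold[keep].T)

# Two augmentations of the same scan give distinct but related networks.
bold = np.random.default_rng(0).standard_normal((200, 90))
pfc_a = pseudo_functional_connectivity(bold, rng=1)
pfc_b = pseudo_functional_connectivity(bold, rng=2)
```

Repeating this draw with different random subsets yields many augmented networks per scan, which is what the alignment objective (LRA) then pulls together in latent space.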
Pages: 4004-4016
Page count: 13
Related articles (50 total)
  • [21] Enhancing diagnostic deep learning via self-supervised pretraining on large-scale, unlabeled non-medical images
    Arasteh, Soroosh Tayebi
    Misera, Leo
    Kather, Jakob Nikolas
    Truhn, Daniel
    Nebelung, Sven
    EUROPEAN RADIOLOGY EXPERIMENTAL, 2024, 8 (01)
  • [22] On the Impact of Self-Supervised Learning in Skin Cancer Diagnosis
    Verdelho, Maria Rita
    Barata, Catarina
    2022 IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (IEEE ISBI 2022), 2022,
  • [23] Self-Supervised Learning Model for Skin Cancer Diagnosis
    Masood, Ammara
    Al-Jumaily, Adel
    Anam, Khairul
    2015 7TH INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING (NER), 2015, : 1012 - 1015
  • [24] Self-supervised learning for modal transfer of brain imaging
    Cheng, Dapeng
    Chen, Chao
    Mao, Yanyan
    You, Panlu
    Huang, Xingdan
    Gai, Jiale
    Zhao, Feng
    Mao, Ning
    FRONTIERS IN NEUROSCIENCE, 2022, 16
  • [25] Self-supervised Learning for Endoscopic Video Analysis
    Hirsch, Roy
    Caron, Mathilde
    Cohen, Regev
    Livne, Amir
    Shapiro, Ron
    Golany, Tomer
    Goldenberg, Roman
    Freedman, Daniel
    Rivlin, Ehud
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT V, 2023, 14224 : 569 - 578
  • [26] Self-Supervised Learning for Infant Cry Analysis
    Gorin, Arsenii
    Subakan, Cem
    Abdoli, Sajjad
    Wang, Junhao
    Latremouille, Samantha
    Onu, Charles
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
  • [27] WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
    Chen, Sanyuan
    Wang, Chengyi
    Chen, Zhengyang
    Wu, Yu
    Liu, Shujie
    Chen, Zhuo
    Li, Jinyu
    Kanda, Naoyuki
    Yoshioka, Takuya
    Xiao, Xiong
    Wu, Jian
    Zhou, Long
    Ren, Shuo
    Qian, Yanmin
    Qian, Yao
    Zeng, Michael
    Yu, Xiangzhan
    Wei, Furu
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16 (06) : 1505 - 1518
  • [28] Self-Supervised Learning with Radiology Reports: A Comparative Analysis of Strategies for Large Vessel Occlusion and Brain CTA Images
    Pachade, S.
    Datta, S.
    Dong, Y.
    Salazar-Marioni, S.
    Abdelkhaleq, R.
    Niktabe, A.
    Roberts, K.
    Sheth, S. A.
    Giancardo, L.
    2023 IEEE 20TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI, 2023,
  • [29] Large-scale supervised similarity learning in networks
    Shiyu Chang
    Guo-Jun Qi
    Yingzhen Yang
    Charu C. Aggarwal
    Jiayu Zhou
    Meng Wang
    Thomas S. Huang
    Knowledge and Information Systems, 2016, 48 : 707 - 740
  • [30] Large-scale supervised similarity learning in networks
    Chang, Shiyu
    Qi, Guo-Jun
    Yang, Yingzhen
    Aggarwal, Charu C.
    Zhou, Jiayu
    Wang, Meng
    Huang, Thomas S.
    KNOWLEDGE AND INFORMATION SYSTEMS, 2016, 48 (03) : 707 - 740