BrainMass: Advancing Brain Network Analysis for Diagnosis With Large-Scale Self-Supervised Learning

Cited by: 2
Authors
Yang, Yanwu [1,2]
Ye, Chenfei [3]
Su, Guinan [4]
Zhang, Ziyao [2,5]
Chang, Zhikai [3]
Chen, Hairui [1,2]
Chan, Piu [6]
Yu, Yue [2]
Ma, Ting [1,2,7]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Elect & Informat Engn, Shenzhen 518000, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518066, Guangdong, Peoples R China
[3] Harbin Inst Technol Shenzhen, Shenzhen 518057, Peoples R China
[4] Tencent Data Platform, Shenzhen 518057, Peoples R China
[5] Chinese Acad Sci, Shenzhen Inst Adv Technol, Paul C Lauterbur Res Ctr Biomed Imaging, Shenzhen 518000, Guangdong, Peoples R China
[6] Capital Med Univ, Xuanwu Hosp, Beijing 100053, Peoples R China
[7] Harbin Inst Technol Shenzhen, Int Res Inst Artificial Intelligence, Shenzhen 518000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Brain modeling; Task analysis; Adaptation models; Self-supervised learning; Biological system modeling; Data models; Transformers; brain network; transformer; large-scale; pretrain; CONNECTIVITY; CONNECTOMICS;
DOI
10.1109/TMI.2024.3414476
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
Foundation models pretrained on large-scale datasets via self-supervised learning demonstrate exceptional versatility across various tasks. Because medical data are heterogeneous and hard to collect, this approach is especially beneficial for medical image analysis and neuroscience research, as it streamlines broad downstream tasks without requiring numerous costly annotations. However, foundation models for brain networks remain largely unexplored, limiting their adaptability and generalizability in broad neuroscience studies. In this study, we aim to bridge this gap. In particular, 1) we curated a comprehensive dataset by collating images from 30 datasets, comprising 70,781 samples from 46,686 participants. Moreover, we introduce pseudo-functional connectivity (pFC) to generate millions of augmented brain networks by randomly dropping timepoints of the BOLD signal; 2) we propose the BrainMass framework for brain network self-supervised learning via mask modeling and feature alignment. BrainMass employs Mask-ROI Modeling (MRM) to bolster intra-network dependencies and regional specificity. Furthermore, a Latent Representation Alignment (LRA) module regularizes augmented brain networks of the same participant, which share similar topological properties, to yield similar latent representations by aligning their latent embeddings. Extensive experiments on eight internal tasks and seven external brain disorder diagnosis tasks demonstrate BrainMass's superior performance, highlighting its generalizability and adaptability. Moreover, BrainMass exhibits powerful few/zero-shot learning abilities and offers meaningful interpretations for various diseases, showcasing its potential for clinical applications.
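As a rough illustration of the pFC augmentation described in the abstract, the sketch below drops a random subset of BOLD timepoints and recomputes a Pearson correlation network. The function name, drop ratio, and array layout are assumptions for illustration, not the authors' implementation.

import numpy as np

def pseudo_fc(bold, drop_ratio=0.1, rng=None):
    """Hypothetical pFC augmentation: randomly drop timepoints from a
    BOLD series (shape T timepoints x N ROIs) and recompute the Pearson
    correlation matrix over the remaining timepoints."""
    rng = np.random.default_rng() if rng is None else rng
    n_timepoints = bold.shape[0]
    n_keep = int(n_timepoints * (1 - drop_ratio))
    keep = np.sort(rng.choice(n_timepoints, size=n_keep, replace=False))
    return np.corrcoef(bold[keep].T)  # N x N connectivity matrix

# Each call yields a different augmented network for the same scan.
bold = np.random.randn(200, 100)  # toy series: 200 TRs, 100 ROIs
view_a, view_b = pseudo_fc(bold), pseudo_fc(bold)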
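The two pretraining objectives can likewise be sketched loosely: MRM reconstructs masked ROI rows of the network, while LRA pulls together the embeddings of two pFC views of the same participant. The loss forms and function names below are assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def mrm_loss(recon, target, mask):
    """Mask-ROI Modeling sketch (assumed loss form): reconstruct the
    masked ROI rows of the connectivity matrix."""
    return F.mse_loss(recon[mask], target[mask])

def lra_loss(z1, z2):
    """Latent Representation Alignment sketch (assumed loss form):
    align latent embeddings of two augmented views of one participant."""
    return (1.0 - F.cosine_similarity(z1, z2, dim=-1)).mean()

# Toy usage: 100 ROIs with 15% masked; 64-dim embeddings for two views.
mask = torch.rand(100) < 0.15
loss = mrm_loss(torch.randn(100, 100), torch.randn(100, 100), mask) \
     + lra_loss(torch.randn(8, 64), torch.randn(8, 64))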
Pages: 4004-4016
Page count: 13
Related Papers (50 total)
  • [1] Self-supervised Learning for Large-scale Item Recommendations
    Yao, Tiansheng
    Yi, Xinyang
    Cheng, Derek Zhiyuan
    Yu, Felix
    Chen, Ting
    Menon, Aditya
    Hong, Lichan
    Chi, Ed H.
    Tjoa, Steve
    Kang, Jieqi
    Ettinger, Evan
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 4321 - 4330
  • [2] Self-supervised contrastive representation learning for large-scale trajectories
    Li, Shuzhe
    Chen, Wei
    Yan, Bingqi
    Li, Zhen
    Zhu, Shunzhi
    Yu, Yanwei
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 148 : 357 - 366
  • [3] Automated Large-Scale Cell Annotation with Self-Supervised Learning
    Tang, Yuan Xi
    Huan, Le
    Xia, Can
    Lin, Fulai
    Zhao, Yundi
    JOURNAL OF THE AMERICAN COLLEGE OF SURGEONS, 2024, 239 (05) : S182 - S182
  • [4] LARGE-SCALE SELF-SUPERVISED SPEECH REPRESENTATION LEARNING FOR AUTOMATIC SPEAKER VERIFICATION
    Chen, Zhengyang
    Chen, Sanyuan
    Wu, Yu
    Qian, Yao
    Wang, Chengyi
    Liu, Shujie
    Qian, Yanmin
    Zeng, Michael
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 6147 - 6151
  • [5] Large-Scale Self-Supervised Human Activity Recognition
    Zadeh, Mohammad Zaki
    Jaiswal, Ashish
    Pavel, Hamza Reza
    Hebri, Aref
    Kapoor, Rithik
    Makedon, Fillia
    PROCEEDINGS OF THE 15TH INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS, PETRA 2022, 2022, : 298 - 299
  • [6] Self-Supervised Pretraining for Large-Scale Point Clouds
    Zhang, Zaiwei
    Bai, Min
    Li, Erran
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [7] Self-supervised cognitive learning for multifaced interest in large-scale industrial recommender systems
    Wang, Yingshuai
    Zhang, Dezheng
    Wulamu, Aziguli
    INFORMATION SCIENCES, 2025, 686
  • [8] ContrastMotion: Self-supervised Scene Motion Learning for Large-Scale LiDAR Point Clouds
    Jia, Xiangze
    Zhou, Hui
    Zhu, Xinge
    Guo, Yandong
    Zhang, Ji
    Ma, Yuexin
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 929 - 937
  • [9] Self-Supervised Graph Transformer on Large-Scale Molecular Data
    Rong, Yu
    Bian, Yatao
    Xu, Tingyang
    Xie, Weiyang
    Wei, Ying
    Huang, Wenbing
    Huang, Junzhou
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [10] Self-supervised Natural Image Reconstruction and Large-scale Semantic Classification from Brain Activity
    Gaziv, Guy
    Beliy, Roman
    Granot, Niv
    Hoogi, Assaf
    Strappini, Francesca
    Golan, Tal
    Irani, Michal
    NEUROIMAGE, 2022, 254