Self-supervised multimodal learning for group inferences from MRI data: Discovering disorder-relevant brain regions and multimodal links

Cited by: 3
Authors
Fedorov, Alex [1 ]
Geenjaar, Eloy [1 ]
Wu, Lei [1 ]
Sylvain, Tristan [4 ]
DeRamus, Thomas P. [1 ]
Luck, Margaux [2 ]
Misiura, Maria [1 ]
Mittapalle, Girish [1 ]
Hjelm, R. Devon [2 ,3 ]
Plis, Sergey M. [1 ]
Calhoun, Vince D. [1 ]
Affiliations
[1] Georgia Tech, Triinst Ctr Translat Res Neuroimaging & Data Sci T, Atlanta, GA 30332 USA
[2] Mila Quebec AI Inst, Montreal, PQ, Canada
[3] Apple Machine Learning Res, Seattle, WA USA
[4] Borealis AI, Montreal, PQ, Canada
Funding
US National Institutes of Health; US National Science Foundation;
Keywords
Deep learning; Multimodal data; Mutual information; Self-supervised learning; Alzheimer's disease; MILD COGNITIVE IMPAIRMENT; INDEPENDENT COMPONENT ANALYSIS; ANTERIOR CINGULATE CORTEX; ALZHEIMERS-DISEASE; FUNCTIONAL CONNECTIVITY; FMRI; REPRESENTATION; HYPOMETABOLISM; STIMULATION; DIAGNOSIS;
DOI
10.1016/j.neuroimage.2023.120485
Chinese Library Classification
Q189 [Neuroscience];
Subject classification code
071006;
Abstract
In recent years, deep learning approaches have gained significant attention in predicting brain disorders using neuroimaging data. However, conventional methods often rely on single-modality data and supervised models, which provide only a limited perspective of the intricacies of the highly complex brain. Moreover, the scarcity of accurate diagnostic labels in clinical settings hinders the applicability of supervised models. To address these limitations, we propose a novel self-supervised framework for extracting multiple representations from multimodal neuroimaging data to enhance group inferences and enable analysis without resorting to labeled data during pre-training. Our approach leverages Deep InfoMax (DIM), a self-supervised methodology renowned for its efficacy in learning representations by estimating mutual information without the need for explicit labels. While DIM has shown promise in predicting brain disorders from single-modality MRI data, its potential for multimodal data remains untapped. This work extends DIM to multimodal neuroimaging data, allowing us to identify disorder-relevant brain regions and explore multimodal links. We present compelling evidence of the efficacy of our multimodal DIM analysis in uncovering disorder-relevant brain regions, including the hippocampus, caudate, and insula, and multimodal links with the thalamus, precuneus, and subthalamus/hypothalamus. Our self-supervised representations demonstrate promising capabilities in predicting the presence of brain disorders across a spectrum of Alzheimer's phenotypes. Comparative evaluations against state-of-the-art unsupervised methods based on autoencoders, canonical correlation analysis, and supervised models highlight the superiority of our proposed method in classification performance, capture of joint information, and interpretability.
The computational efficiency of the decoder-free strategy enhances its practical utility, saving compute resources without compromising performance. This work offers a significant step forward in addressing the challenge of understanding multimodal links in complex brain disorders, with potential applications in neuroimaging research and clinical diagnosis.
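The mutual-information estimation at the heart of DIM-style multimodal objectives is typically realized as a contrastive lower bound: embeddings from two modalities of the same subject are treated as positive pairs and all other pairings in the batch as negatives. The snippet below is a minimal numpy sketch of such an InfoNCE-type bound under that assumption; the function name, temperature parameter, and shapes are illustrative and do not reproduce the authors' implementation.

```python
import numpy as np

def infonce_loss(z_a, z_b, temperature=0.1):
    """Contrastive (InfoNCE-type) loss between paired embeddings.

    z_a, z_b : (n, d) arrays, where row i of each is a representation of
    the same subject from two modalities (e.g. structural and functional MRI).
    Row-matched pairs are positives; all other cross pairings are negatives.
    A low loss means the bound on mutual information between modalities is high.
    """
    # L2-normalize so the dot product is a cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = (z_a @ z_b.T) / temperature          # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the positive pair on the diagonal
    return -np.mean(np.diag(log_prob))
```

As a sanity check, perfectly matched embeddings yield a loss near zero, while deliberately mispaired batches (e.g. shifting one modality's rows) drive the loss up, reflecting the lost cross-modal correspondence.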
Pages: 32