Discriminative analysis of early Alzheimer's disease using multi-modal imaging and multi-level characterization with multi-classifier (M3)

Cited by: 237
Authors
Dai, Zhengjia [1]
Yan, Chaogan [1]
Wang, Zhiqun [2]
Wang, Jinhui [1]
Xia, Mingrui [1]
Li, Kuncheng [2,3]
He, Yong [1]
Affiliations
[1] Beijing Normal Univ, State Key Lab Cognit Neurosci & Learning, Beijing 100875, Peoples R China
[2] Capital Med Univ, Xuanwu Hosp, Dept Radiol, Beijing, Peoples R China
[3] Capital Med Univ, Minist Educ, Key Lab Neurodegenerat Dis, Beijing, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Alzheimer's disease; MRI; fMRI; ALFF; ReHo; Connectivity; Network; Connectome; VOXEL-BASED MORPHOMETRY; GRAY-MATTER LOSS; RESTING-STATE; FUNCTIONAL CONNECTIVITY; HUMAN BRAIN; PATTERN-CLASSIFICATION; WHITE-MATTER; MRI; DIAGNOSIS; NETWORKS;
DOI
10.1016/j.neuroimage.2011.10.003
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Subject classification code
071006;
Abstract
Increasing attention has recently been directed to the applications of pattern recognition and brain imaging techniques in the effective and accurate diagnosis of Alzheimer's disease (AD). However, most of the existing research focuses on the use of single-modal (e.g., structural or functional MRI) or single-level (e.g., brain local or connectivity metrics) biomarkers for the diagnosis of AD. In this study, we propose a methodological framework, called multi-modal imaging and multi-level characteristics with multi-classifier (M3), to discriminate patients with AD from healthy controls. This approach involved data analysis from two imaging modalities: structural MRI, which was used to measure regional gray matter volume, and resting-state functional MRI, which was used to measure three different levels of functional characteristics, including the amplitude of low-frequency fluctuations (ALFF), regional homogeneity (ReHo) and regional functional connectivity strength (RFCS). For each metric, we computed values for ninety regions of interest derived from a prior atlas, which were then used to train a multi-classifier based on four maximum uncertainty linear discriminant analysis base classifiers. The performance of this method was evaluated using leave-one-out cross-validation. Applying the M3 approach to the dataset containing 16 AD patients and 22 healthy controls led to a classification accuracy of 89.47% with a sensitivity of 87.50% and a specificity of 90.91%. Further analysis revealed that the most discriminative features for classification were predominantly located in several default-mode (medial frontal gyrus, posterior cingulate gyrus, hippocampus and parahippocampal gyrus), occipital (fusiform gyrus, inferior and middle occipital gyrus) and subcortical (amygdala and pallidum of lenticular nucleus) regions. Thus, the M3 method shows promising classification performance by incorporating information from different imaging modalities and different functional properties, and it has the potential to improve the clinical diagnosis and treatment evaluation of AD. (C) 2011 Elsevier Inc. All rights reserved.
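As a rough illustration of the multi-classifier idea described in the abstract, the following Python sketch combines four feature-specific base classifiers under leave-one-out cross-validation. It is not the authors' implementation: scikit-learn's plain LinearDiscriminantAnalysis stands in for the maximum uncertainty LDA base classifier, a simple majority vote stands in for the paper's combination rule, and random arrays stand in for the four 90-ROI feature sets (gray matter volume, ALFF, ReHo, RFCS); only the sample sizes and the sensitivity/specificity arithmetic are taken from the abstract.

```python
# Minimal M3-style ensemble sketch (illustrative assumptions, not the published method).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_ad, n_hc, n_roi = 16, 22, 90                      # sample sizes and ROI count from the abstract
y = np.r_[np.ones(n_ad), np.zeros(n_hc)]            # 1 = AD, 0 = healthy control
# Placeholder feature matrices; in practice each would hold one 90-ROI metric per subject.
features = {name: rng.standard_normal((n_ad + n_hc, n_roi))
            for name in ("GMV", "ALFF", "ReHo", "RFCS")}

loo = LeaveOneOut()
y_pred = np.empty_like(y)
for train_idx, test_idx in loo.split(np.arange(len(y))):
    votes = []
    for X in features.values():                     # one base classifier per modality/level
        clf = LinearDiscriminantAnalysis()           # stand-in for maximum uncertainty LDA
        clf.fit(X[train_idx], y[train_idx])          # note p > n here, so LDA may warn about collinearity
        votes.append(clf.predict(X[test_idx])[0])
    y_pred[test_idx] = 1 if sum(votes) >= 2 else 0   # majority vote; ties broken toward AD

tp = np.sum((y_pred == 1) & (y == 1))
tn = np.sum((y_pred == 0) & (y == 0))
sensitivity = tp / n_ad                              # fraction of AD patients correctly detected
specificity = tn / n_hc                              # fraction of controls correctly rejected
accuracy = (tp + tn) / (n_ad + n_hc)
print(f"acc={accuracy:.2%}  sens={sensitivity:.2%}  spec={specificity:.2%}")
```

With the confusion counts implied by the paper (14 of 16 AD patients and 20 of 22 controls correctly classified), the same formulas reproduce the reported 87.50% sensitivity, 90.91% specificity and 89.47% accuracy.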
Pages: 2187-2195
Number of pages: 9
Related papers
50 records in total
  • [1] Multi-level graph regularized robust multi-modal feature selection for Alzheimer's disease classification
    Zhang, Chao
    Fan, Wentao
    Li, Huaxiong
    Chen, Chunlin
    Knowledge-Based Systems, 2024, 293
  • [2] A robust multi-level sparse classifier with multi-modal feature extraction for face recognition
    Vishwakarma, Virendra P.
    Mishra, Gargi
    INTERNATIONAL JOURNAL OF APPLIED PATTERN RECOGNITION, 2019, 6 (01) : 76 - 102
  • [3] Deep Multi-Modal Discriminative and Interpretability Network for Alzheimer's Disease Diagnosis
    Zhu, Qi
    Xu, Bingliang
    Huang, Jiashuang
    Wang, Heyang
    Xu, Ruting
    Shao, Wei
    Zhang, Daoqiang
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (05) : 1472 - 1483
  • [4] Multi-level Deep Correlative Networks for Multi-modal Sentiment Analysis
    Cai, Guoyong
    Lyu, Guangrui
    Lin, Yuming
    Wen, Yimin
    Chinese Journal of Electronics, 2020, 29 (06) : 1025 - 1038
  • [5] Multi-modal discriminative dictionary learning for Alzheimer's disease and mild cognitive impairment
    Li, Qing
    Wu, Xia
    Xu, Lele
    Chen, Kewei
    Yao, Li
    Li, Rui
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2017, 150 : 1 - 8
  • [6] M3: using mask-attention and multi-scale for multi-modal brain MRI classification
    Kong, Guanqing
    Wu, Chuanfu
    Zhang, Zongqiu
    Yin, Chuansheng
    Qin, Dawei
    FRONTIERS IN NEUROINFORMATICS, 2024, 18
  • [7] M3L: Language-based Video Editing via Multi-Modal Multi-Level Transformers
    Fu, Tsu-Jui
    Wang, Xin Eric
    Grafton, Scott T.
    Eckstein, Miguel P.
    Wang, William Yang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 10503 - 10512
  • [8] Multi-modal classification of Alzheimer's disease using nonlinear graph fusion
    Tong, Tong
    Gray, Katherine
    Gao, Qinquan
    Chen, Liang
    Rueckert, Daniel
    PATTERN RECOGNITION, 2017, 63 : 171 - 181
  • [9] Interoperable Multi-Modal Data Analysis Platform for Alzheimer's Disease Management
    Pang, Zhen
    Zhang, Shuhao
    Yang, Yun
    Qi, Jun
    Yang, Po
    2020 IEEE INTL SYMP ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, INTL CONF ON BIG DATA & CLOUD COMPUTING, INTL SYMP SOCIAL COMPUTING & NETWORKING, INTL CONF ON SUSTAINABLE COMPUTING & COMMUNICATIONS (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2020), 2020, : 1321 - 1327