Analysis of sampling techniques for imbalanced data: An n=648 ADNI study

Cited by: 146
Authors
Dubey, Rashmi [1,2]
Zhou, Jiayu [1,2]
Wang, Yalin [1]
Thompson, Paul M. [3]
Ye, Jieping [1,2]
Affiliations
[1] Arizona State Univ, Sch Comp Informat & Decis Syst Engn, Tempe, AZ 85287 USA
[2] Arizona State Univ, Biodesign Inst, Ctr Evolutionary Med & Informat, Tempe, AZ 85287 USA
[3] Univ Calif Los Angeles, Sch Med, Imaging Genet Ctr, Lab Neuro Imaging, Los Angeles, CA USA
Funding
US National Science Foundation; US National Institutes of Health; Canadian Institutes of Health Research
Keywords
Alzheimer's disease; Classification; Imbalanced data; Undersampling; Oversampling; Feature selection; ALZHEIMERS-DISEASE; CLASSIFICATION; MRI; HIPPOCAMPAL; ASSOCIATION; PREDICTION; BIOMARKERS; SIGNATURE; DIAGNOSIS; ATROPHY;
DOI
10.1016/j.neuroimage.2013.10.005
Chinese Library Classification
Q189 [Neuroscience]
Discipline code
071006
Abstract
Many neuroimaging applications deal with imbalanced imaging data. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly twice as numerous as the Alzheimer's disease (AD) patients for the structural magnetic resonance imaging (MRI) modality and six times as numerous as the control cases for the proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task: traditional classifiers that aim to maximize overall prediction accuracy tend to assign all data to the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and combinations of over- and undersampling. We also examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated with two classifiers, Random Forest and Support Vector Machines, using classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Our extensive experimental results show that, for various problem settings in ADNI, (1) a balanced training set obtained by K-Medoids-based undersampling gives the best overall performance among the data sampling techniques and the no-sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among the feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. (C) 2013 Elsevier Inc. All rights reserved.
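The best-performing configuration reported in the abstract combines K-Medoids undersampling of the majority class with an ensemble of classifiers trained on multiple balanced subsets. The following is a minimal Python sketch of that idea, not the authors' released code: it assumes scikit-learn and the KMedoids implementation from the scikit-learn-extra package are available, and the use of Random Forest as the base learner, the per-member bootstrap of the majority class, and the probability-averaging step are illustrative assumptions.

# A minimal sketch (not the authors' code) of K-Medoids undersampling of the
# majority class plus an ensemble of classifiers trained on balanced subsets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn_extra.cluster import KMedoids  # assumed dependency (scikit-learn-extra)


def kmedoids_undersample(X_maj, X_min, random_state=0):
    """Cluster the majority class into as many clusters as there are minority
    samples and keep only the cluster medoids (which are real samples), so the
    two classes end up the same size."""
    km = KMedoids(n_clusters=len(X_min), random_state=random_state).fit(X_maj)
    return km.cluster_centers_


def ensemble_scores(X_train, y_train, X_test, majority_label=0, minority_label=1,
                    n_members=5):
    """Train one Random Forest per balanced subset and average the predicted
    probabilities of the minority (positive) class over the ensemble."""
    X_min = X_train[y_train == minority_label]
    X_maj = X_train[y_train == majority_label]
    scores = []
    for seed in range(n_members):
        # Bootstrap the majority class so each member sees a different subset
        # before undersampling (an assumption made here to diversify the
        # ensemble; the paper builds multiple undersampled datasets).
        rng = np.random.default_rng(seed)
        boot = X_maj[rng.choice(len(X_maj), size=len(X_maj), replace=True)]
        X_bal = np.vstack([X_min, kmedoids_undersample(boot, X_min, random_state=seed)])
        y_bal = np.concatenate([np.full(len(X_min), minority_label),
                                np.full(len(X_min), majority_label)])
        clf = RandomForestClassifier(n_estimators=200, random_state=seed)
        clf.fit(X_bal, y_bal)
        scores.append(clf.predict_proba(X_test)[:, list(clf.classes_).index(minority_label)])
    return np.mean(scores, axis=0)

In the study this sampling step is preceded by feature selection (sparse logistic regression with stability selection performed best among the six algorithms compared) and is also evaluated with SVM base learners and several sampling rates; the sketch covers only the K-Medoids undersampling and ensemble-averaging steps.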
Pages: 220-241
Page count: 22