Analysis of sampling techniques for imbalanced data: An n=648 ADNI study

Cited: 146
Authors
Dubey, Rashmi [1 ,2 ]
Zhou, Jiayu [1 ,2 ]
Wang, Yalin [1 ]
Thompson, Paul M. [3 ]
Ye, Jieping [1 ,2 ]
Affiliations
[1] Arizona State Univ, Sch Comp Informat & Decis Syst Engn, Tempe, AZ 85287 USA
[2] Arizona State Univ, Biodesign Inst, Ctr Evolutionary Med & Informat, Tempe, AZ 85287 USA
[3] Univ Calif Los Angeles, Sch Med, Imaging Genet Ctr, Lab Neuro Imaging, Los Angeles, CA USA
Funding
National Science Foundation (US); National Institutes of Health (US); Canadian Institutes of Health Research
Keywords
Alzheimer's disease; Classification; Imbalanced data; Undersampling; Oversampling; Feature selection; ALZHEIMERS-DISEASE; CLASSIFICATION; MRI; HIPPOCAMPAL; ASSOCIATION; PREDICTION; BIOMARKERS; SIGNATURE; DIAGNOSIS; ATROPHY;
DOI
10.1016/j.neuroimage.2013.10.005
CLC Classification Code
Q189 [Neuroscience]
Subject Classification Code
071006
Abstract
Many neuroimaging applications deal with imbalanced imaging data. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly twice the number of Alzheimer's disease (AD) patients for the structural magnetic resonance imaging (MRI) modality, and six times the number of control cases for the proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and combined over- and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two classifiers, Random Forest and Support Vector Machines, based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Our extensive experimental results show that, for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids-based undersampling gives the best overall performance among the data sampling techniques and the no-sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among the feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. (C) 2013 Elsevier Inc. All rights reserved.
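The K-Medoids-based undersampling the abstract highlights can be sketched as follows. This is not the authors' implementation; it is a minimal NumPy illustration of the idea: cluster the majority-class samples into as many clusters as there are minority samples, then keep only each cluster's medoid, yielding a balanced training set of representative majority examples. The function name and iteration scheme are assumptions for illustration.

```python
import numpy as np

def kmedoids_undersample(X_major, n_select, n_iter=20, seed=0):
    """Undersample a majority class via K-Medoids: cluster the
    majority-class rows into n_select clusters and return the indices
    of the medoids (one representative sample per cluster)."""
    rng = np.random.default_rng(seed)
    n = len(X_major)
    # Pairwise Euclidean distance matrix (fine for small n; ADNI-scale data
    # would want a chunked or library-based computation).
    D = np.linalg.norm(X_major[:, None, :] - X_major[None, :, :], axis=-1)
    medoids = rng.choice(n, size=n_select, replace=False)
    for _ in range(n_iter):
        # Assign every sample to its nearest current medoid.
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for k in range(n_select):
            members = np.where(labels == k)[0]
            if len(members):
                # New medoid: the member minimizing total within-cluster distance.
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[k] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break  # converged
        medoids = new_medoids
    return np.sort(medoids)
```

In the setting the paper describes, the selected majority rows would be concatenated with all minority-class samples to form one balanced training set; the paper's ensemble variant repeats this over multiple undersampled sets and combines the resulting classifiers.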
Pages: 220-241 (22 pages)
Related papers (50 items)
[41] Ariannezhad, Amin; Karimpour, Abolfazl; Qin, Xiao; Wu, Yao-Jan; Salmani, Yasamin. Handling Imbalanced Data for Real-Time Crash Prediction: Application of Boosting and Sampling Techniques. Journal of Transportation Engineering Part A-Systems, 2021, 147(03).
[42] Tao, Xinmin; Li, Qing; Ren, Chao; Guo, Wenjie; Li, Chenxi; He, Qing; Liu, Rui; Zou, Junrong. Real-value negative selection over-sampling for imbalanced data set learning. Expert Systems with Applications, 2019, 129: 118-134.
[43] Li, Dongdong; Chi, Ziqiu; Wang, Bolu; Wang, Zhe; Yang, Hai; Du, Wenli. Entropy-based hybrid sampling ensemble learning for imbalanced data. International Journal of Intelligent Systems, 2021, 36(07): 3039-3067.
[44] Jia, Pengfei; Zhang, Chunkai; He, Zhenyu. A New Sampling Approach for Classification of Imbalanced Data Sets with High Density. 2014 International Conference on Big Data and Smart Computing (BIGCOMP), 2014: 217-222.
[45] de Vargas, Vitor Werner; Schneider Aranda, Jorge Arthur; Costa, Ricardo dos Santos; da Silva Pereira, Paulo Ricardo; Victoria Barbosa, Jorge Luis. Imbalanced data preprocessing techniques for machine learning: a systematic mapping study. Knowledge and Information Systems, 2023, 65(01): 31-57.
[46] Zhao, Jianhua; Liu, Ning. Semi-supervised Classification Based Mixed Sampling for Imbalanced Data. Open Physics, 2019, 17(01): 975-983.
[47] Chaabane, Ikram; Guermazi, Radhouane; Hammami, Mohamed. Enhancing techniques for learning decision trees from imbalanced data. Advances in Data Analysis and Classification, 2020, 14(03): 677-745.
[48] Tsai, Chih-Fong; Chen, Kuan-Chen; Lin, Wei-Chao. Feature selection and its combination with data over-sampling for multi-class imbalanced datasets. Applied Soft Computing, 2024, 153.
[49] Nizam-Ozogur, Hatice; Orman, Zeynep. A heuristic-based hybrid sampling method using a combination of SMOTE and ENN for imbalanced health data. Expert Systems, 2024, 41(08).
[50] Aridas, Christos K.; Karlos, Stamatis; Kanas, Vasileios G.; Fazakis, Nikos; Kotsiantis, Sotiris B. Uncertainty Based Under-Sampling for Learning Naive Bayes Classifiers Under Imbalanced Data Sets. IEEE Access, 2020, 8: 2122-2133.