An empirical evaluation of sampling methods for the classification of imbalanced data

Cited by: 45
Authors
Kim, Misuk [1 ]
Hwang, Kyu-Baek [1 ]
Affiliations
[1] Soongsil Univ, Grad Sch, Dept Comp Sci & Engn, Seoul, South Korea
Keywords
DATA SETS; PERFORMANCE; CLASSIFIERS; SMOTE;
DOI
10.1371/journal.pone.0271260
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07 ; 0710 ; 09 ;
Abstract
In numerous classification problems, the class distribution is not balanced. For example, positive examples are rare in disease diagnosis and credit card fraud detection. General machine learning methods are known to be suboptimal for such imbalanced classification. One popular solution is to balance the training data by oversampling the underrepresented (or undersampling the overrepresented) classes before applying machine learning algorithms. However, despite its popularity, the effectiveness of sampling has not been rigorously and comprehensively evaluated. This study assessed combinations of seven sampling methods and eight machine learning classifiers (56 combinations in total) on 31 datasets with varying degrees of imbalance. We used the areas under the precision-recall curve (AUPRC) and the receiver operating characteristic curve (AUROC) as performance measures; the AUPRC is known to be more informative than the AUROC for imbalanced classification. We observed that sampling significantly changed classifier performance (paired t-tests, P < 0.05) in only a few cases (12.2% for AUPRC and 10.0% for AUROC). Surprisingly, sampling was more likely to reduce than to improve classification performance, and its adverse effects were more pronounced in AUPRC than in AUROC. Among the sampling methods, undersampling performed worse than the others, and sampling was more effective at improving linear classifiers. Most importantly, sampling was not needed to obtain the optimal classifier for most of the 31 datasets. In addition, we found two interesting examples in which sampling significantly reduced AUPRC while significantly improving AUROC (paired t-tests, P < 0.05). In conclusion, the applicability of sampling is limited because it can be ineffective or even harmful, and the choice of performance measure is crucial for decision making. Our results provide valuable insights into the effects and characteristics of sampling for imbalanced classification.
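The abstract contrasts the two performance measures and the resampling idea at its core. As an illustration (not code from the paper), a minimal NumPy sketch of random oversampling and of computing AUROC and AUPRC from classifier scores might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy data: 5% positives; positive scores are shifted up by one
# standard deviation, so the "classifier" is informative but imperfect.
y = np.zeros(1000, dtype=int)
y[:50] = 1
scores = rng.normal(loc=y.astype(float), scale=1.0)

def random_oversample(X, y, rng):
    """Duplicate randomly chosen minority-class rows until the classes balance."""
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = np.concatenate([majority, minority, extra])
    return X[idx], y[idx]

def auroc(y, s):
    """AUROC = probability that a random positive outscores a random negative
    (Mann-Whitney U statistic); ties count half."""
    pos, neg = s[y == 1], s[y == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

def auprc(y, s):
    """Average precision: precision evaluated at every true positive while
    scanning examples from highest to lowest score."""
    order = np.argsort(-s, kind="stable")
    y_sorted = y[order]
    precision = np.cumsum(y_sorted) / np.arange(1, len(y) + 1)
    return precision[y_sorted == 1].mean()

print(f"AUROC = {auroc(y, scores):.3f}, AUPRC = {auprc(y, scores):.3f}")
```

Note the differing chance baselines: a random scorer yields AUROC ≈ 0.5 regardless of imbalance, whereas the AUPRC baseline equals the positive-class prevalence (0.05 here), which is one reason AUPRC is the more informative measure for imbalanced data.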
Pages: 22