Class imbalance revisited: a new experimental setup to assess the performance of treatment methods

Cited by: 133
Authors
Prati, Ronaldo C. [1 ]
Batista, Gustavo E. A. P. A. [2 ]
Silva, Diego F. [2 ]
Affiliations
[1] Univ Fed ABC, Ctr Matemat Comp & Cogn, Santo Andre, SP, Brazil
[2] Univ Sao Paulo, Inst Ciencias Matemat & Comp, BR-13560 Sao Carlos, Brazil
Funding
São Paulo Research Foundation (FAPESP), Brazil
Keywords
Class imbalance; Experimental setup; Sampling methods; CONFIDENCE-INTERVALS; TESTS; SMOTE;
DOI
10.1007/s10115-014-0794-3
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In the last decade, class imbalance has attracted a huge amount of attention from researchers and practitioners. Class imbalance is ubiquitous in Machine Learning, Data Mining and Pattern Recognition applications; therefore, these research communities have responded to such interest with literally dozens of methods and techniques. Surprisingly, there are still many fundamental open questions, such as "Are all learning paradigms equally affected by class imbalance?", "What is the expected performance loss for different imbalance degrees?" and "How much of the performance loss can be recovered by the treatment methods?". In this paper, we propose a simple experimental design to assess the performance of class imbalance treatment methods. This experimental setup uses real data sets with artificially modified class distributions to evaluate classifiers over a wide range of class imbalance. We apply this experimental design in a large-scale evaluation with 22 data sets and seven learning algorithms from different paradigms. We also propose a statistical procedure, based on confidence intervals, for evaluating the relative degradations and recoveries. This procedure allows a simple yet insightful visualization of the results and provides the basis for drawing statistical conclusions. Our results indicate that the expected performance loss, as a percentage of the performance obtained with the balanced distribution, is quite modest (below 5 %) for distributions with 10 % or more minority-class examples. However, the loss tends to increase quickly for higher degrees of class imbalance, reaching 20 % when only 1 % of the examples belong to the minority class. Support Vector Machines are the classifier paradigm least affected by class imbalance, being almost insensitive to all but the most imbalanced distributions. Finally, we show that the treatment methods only partially recover the performance losses: on average, about 30 % or less of the performance lost due to class imbalance was recovered by these methods.
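The core of the experimental setup is subsampling a real data set so that the minority class makes up a controlled fraction of the data. The sketch below illustrates one plausible way to do this; the function name and the choice to keep the majority class fixed while subsampling the minority class are assumptions for illustration, not the paper's exact procedure.

```python
import random

def subsample_to_ratio(majority, minority, minority_pct, seed=0):
    """Build an artificially imbalanced data set in which the minority
    class accounts for `minority_pct` percent of the examples.

    The majority class is kept intact and the minority class is randomly
    subsampled (one possible design; the paper may instead fix the total
    size). `majority` and `minority` are lists of examples.
    """
    rng = random.Random(seed)
    p = minority_pct / 100.0
    # Solve n_min / (n_maj + n_min) = p  =>  n_min = p * n_maj / (1 - p)
    n_min = int(round(p * len(majority) / (1.0 - p)))
    n_min = min(n_min, len(minority))
    return majority + rng.sample(minority, n_min)

# Example: from 900 majority and 500 minority examples, build a data
# set with a 10 % minority class (900 majority + 100 minority = 1000).
data = subsample_to_ratio([0] * 900, [1] * 500, minority_pct=10)
```

Sweeping `minority_pct` over a grid (e.g. 1 %, 5 %, 10 %, ..., 50 %) and retraining a classifier at each point yields the degradation curves the abstract describes.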
Pages: 247-270 (24 pages)