Clustering-based undersampling in class-imbalanced data

Cited by: 548
Authors
Lin, Wei-Chao [1 ]
Tsai, Chih-Fong [2 ]
Hu, Ya-Han [3 ]
Jhang, Jing-Shang [2 ]
Affiliations
[1] Asia Univ, Dept Comp Sci & Informat Engn, Taichung, Taiwan
[2] Natl Cent Univ, Dept Informat Management, Taoyuan, Taiwan
[3] Natl Chung Cheng Univ, Dept Informat Management, Chiayi, Taiwan
Keywords
Class imbalance; Imbalanced data; Machine learning; Clustering; Classifier ensembles; Classification; Prediction
DOI
10.1016/j.ins.2017.05.008
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Class imbalance is a common problem in many real-world data sets, where one class (the minority class) contains a small number of data points and the other (the majority class) contains a large number of data points. It is notably difficult to develop an effective model with current data mining and machine learning algorithms without first preprocessing the data to balance the classes. Random undersampling and oversampling have been used in numerous studies to ensure that the different classes contain the same number of data points. A classifier ensemble (i.e., a structure containing several classifiers) can then be trained on several different balanced data sets for later classification. In this paper, we introduce two undersampling strategies in which a clustering technique is used during the data preprocessing step. Specifically, the number of clusters in the majority class is set equal to the number of data points in the minority class. The first strategy uses the cluster centers to represent the majority class, whereas the second uses the nearest neighbors of the cluster centers. A further study examined the effect on performance of adding or removing 5 to 10 cluster centers in the majority class. The experimental results obtained using 44 small-scale and 2 large-scale data sets revealed that the clustering-based undersampling approach with the second strategy outperformed five state-of-the-art approaches. In particular, this approach combined with a single multilayer perceptron classifier and with C4.5 decision tree classifier ensembles delivered the best performance over both small- and large-scale data sets. (C) 2017 Elsevier Inc. All rights reserved.
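Below is a minimal sketch, not the authors' released code, of the two undersampling strategies as described in the abstract. It assumes scikit-learn and NumPy; the function and variable names (cluster_undersample, X_maj, X_min) are illustrative only, and details such as tie handling differ from the paper's implementation.

```python
# Sketch of clustering-based undersampling: k-means on the majority class
# with k equal to the minority-class size, then keep either the cluster
# centers (strategy 1) or the majority points nearest to them (strategy 2).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

def cluster_undersample(X_maj, X_min, strategy="nearest"):
    """Reduce the majority class to len(X_min) representative points.

    strategy="centers": use the k-means cluster centers themselves.
    strategy="nearest": use the real majority point closest to each center.
    """
    k = len(X_min)  # number of clusters = size of the minority class
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_maj)

    if strategy == "centers":
        X_maj_reduced = km.cluster_centers_
    else:
        # Index of the majority-class point nearest to each cluster center
        # (nearest neighbors of distinct centers may occasionally coincide).
        idx = pairwise_distances_argmin(km.cluster_centers_, X_maj)
        X_maj_reduced = X_maj[idx]

    # Stack the reduced majority class with the full minority class.
    X_bal = np.vstack([X_maj_reduced, X_min])
    y_bal = np.hstack([np.zeros(len(X_maj_reduced)), np.ones(len(X_min))])
    return X_bal, y_bal
```

The balanced set returned by such a routine could then be used to train a single classifier (e.g. a multilayer perceptron) or, with repeated runs, a classifier ensemble, in line with the setup the abstract describes.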
Pages: 17-26
Number of pages: 10