An Ensemble Learning Approach with Gradient Resampling for Class-Imbalance Problems

Cited by: 11
Authors
Zhao, Hongke [1 ,2 ]
Zhao, Chuang [1 ,2 ]
Zhang, Xi [1 ,2 ,3 ]
Liu, Nanlin [1 ,2 ]
Zhu, Hengshu [4 ]
Liu, Qi [5 ]
Xiong, Hui [6 ]
Affiliations
[1] Tianjin Univ, Coll Management & Econ, Tianjin 300000, Peoples R China
[2] Tianjin Univ, Lab Computat & Analyt Complex Management Syst, CACMS, Tianjin 300000, Peoples R China
[3] Beijing Inst Technol, Sch Management & Econ, Beijing 100081, Peoples R China
[4] BOSS Zhipin, Career Sci Lab, Beijing 100000, Peoples R China
[5] Univ Sci & Technol China, Anhui Prov Key Lab Big Data Anal & Applict, Hefei 230000, Anhui, Peoples R China
[6] Hong Kong Univ Sci & Technol, Guangzhou 510000, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
class-imbalance learning; ensemble learning; under-sampling strategy; gradient distribution; SMOTE; CLASSIFICATION; ALGORITHM;
DOI
10.1287/ijoc.2023.1274
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Discipline Code
081203 ; 0835 ;
Abstract
Imbalanced classification arises in many real-world applications and has been studied extensively. Most existing algorithms alleviate the imbalance by sampling or by guiding ensemble learners with penalties, and the combination of ensemble learning and class-level sampling strategies has made great progress. In practice, however, certain hard examples contribute little to model learning and can even degrade performance. Viewed through the classification difficulty of individual samples, an important motivation is to design algorithms that equip different samples with progressive learning. Unfortunately, how to configure the sampling and learning strategies under ensemble principles at the sample level remains a research gap. In this paper, we take a sample-level view rather than the class-level view of existing studies. We design an ensemble approach coupled with sample-level gradient resampling, namely balanced cascade with filters (BCWF). As a preliminary exploration, we first design a hard-example mining algorithm to explore the gradient distribution of sample classification difficulty and to identify hard examples. Specifically, BCWF uses an under-sampling strategy and a boosting manner to train T predictive classifiers and to reidentify hard examples. Moreover, we design two types of filters for BCWF: one assembled with a hard filter (BCWF_h) and the other with a soft filter (BCWF_s). In each boosting round, BCWF_h strictly removes a gradient band of the hardest examples from both classes, whereas BCWF_s removes a larger number of harder and easy examples simultaneously so that the retained set is class balanced. The T trained classifiers can then be combined with two ensemble voting strategies: average probability and majority vote. To evaluate the proposed approach, we conduct intensive experiments on 10 benchmark data sets and apply our algorithms to default-user detection on a real-world peer-to-peer lending data set. The experimental results demonstrate the effectiveness and the managerial implications of our approach compared with 11 competitive algorithms.
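The following is a minimal Python sketch of the BCWF_h-style loop described in the abstract, not the authors' implementation: each boosting round under-samples the majority class to balance the training pool, fits a base learner, scores sample difficulty with a gradient-like quantity |y - p(y=1|x)|, and drops the hardest band from both classes before the next round. The function names (`bcwf_h_sketch`, `predict_avg_prob`, `predict_majority`), the decision-tree base learner, and the parameters `T` and `drop_frac` are illustrative assumptions.

```python
# Hypothetical sketch of a BCWF_h-style training loop (assumptions noted above).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bcwf_h_sketch(X, y, T=10, drop_frac=0.05, seed=0):
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=int)
    classifiers = []
    for _ in range(T):
        # Class-balanced under-sampling of the current example pool.
        pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
        n = min(len(pos), len(neg))
        if n == 0:
            break
        idx = np.concatenate([rng.choice(pos, n, replace=False),
                              rng.choice(neg, n, replace=False)])
        clf = DecisionTreeClassifier(max_depth=4, random_state=seed)
        clf.fit(X[idx], y[idx])
        classifiers.append(clf)
        # Difficulty as a gradient-like score |y - p|; the "hard filter"
        # removes the hardest band from BOTH classes before the next round.
        diff = np.abs(y - clf.predict_proba(X)[:, 1])
        keep = np.ones(len(y), dtype=bool)
        for c in (0, 1):
            cls = np.where(y == c)[0]
            k = max(1, int(drop_frac * len(cls)))
            keep[cls[np.argsort(diff[cls])[-k:]]] = False
        X, y = X[keep], y[keep]
    return classifiers

def predict_avg_prob(classifiers, X):
    # Ensemble voting strategy 1: average predicted probability.
    p = np.mean([c.predict_proba(X)[:, 1] for c in classifiers], axis=0)
    return (p >= 0.5).astype(int)

def predict_majority(classifiers, X):
    # Ensemble voting strategy 2: majority vote over hard predictions.
    votes = np.stack([c.predict(X) for c in classifiers])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

A soft-filter (BCWF_s) variant would differ only in the filtering step, removing a larger mix of harder and easy examples so that the retained pool stays class balanced across rounds.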
Pages: 747-763
Page count: 18