Diversity techniques improve the performance of the best imbalance learning ensembles

Cited by: 147
Authors
Diez-Pastor, Jose F. [1 ]
Rodriguez, Juan J. [1 ]
Garcia-Osorio, Cesar I. [1 ]
Kuncheva, Ludmila I. [2 ]
Affiliations
[1] Univ Burgos, Burgos, Spain
[2] Univ Bangor, Bangor, Gwynedd, Wales
Keywords
Classifier ensembles; Imbalanced data sets; SMOTE; Undersampling; Rotation forest; Diversity; DECISION TREES; DATA-SETS; CLASSIFICATION; SMOTE; ALGORITHMS; RULES; TESTS;
DOI
10.1016/j.ins.2015.07.025
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline code
0812 ;
Abstract
Many real-life problems are imbalanced: the number of instances belonging to one class is much larger than the numbers in the other classes. Examples are spam detection, credit card fraud detection and medical diagnosis. Ensembles of classifiers have become popular for this kind of problem because of their ability to obtain better results than individual classifiers. The techniques most commonly used by ensembles specifically designed for imbalanced problems are re-weighting, oversampling and undersampling. Other techniques, originally intended to increase ensemble diversity, have not been systematically studied for their effect on imbalanced problems; among these are Random Oracles, Disturbing Neighbors, Random Feature Weights and Rotation Forest. This paper presents an overview and an experimental study of various ensemble-based methods for imbalanced problems. The methods were tested in their original form and in conjunction with several diversity-increasing techniques, using 84 imbalanced data sets from two well-known repositories. The results show that these diversity-increasing techniques significantly improve the performance of ensemble methods for imbalanced problems, and the paper offers guidance on when it is most convenient to use them. (C) 2015 Elsevier Inc. All rights reserved.
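The abstract names SMOTE among the oversampling techniques studied. As a point of reference, here is a minimal sketch of SMOTE's core idea, interpolating each chosen minority sample toward one of its k nearest minority-class neighbours; the function name, parameters and Euclidean-distance assumption are illustrative, not taken from the paper:

```python
import random

def smote(minority, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: synthesize n_new points, each lying on the
    segment between a random minority sample and one of its k nearest
    minority-class neighbours (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nn = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic

# Toy minority class in 2-D; generate 4 synthetic points.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, n_new=4)
```

Because each synthetic point is a convex combination of two existing minority samples, the new points stay inside the region spanned by the minority class rather than duplicating existing instances.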
Pages: 98-117
Page count: 20