Vote-boosting ensembles

Cited by: 42
Authors
Sabzevari, Maryam [1]
Martinez-Munoz, Gonzalo [1]
Suarez, Alberto [1]
Affiliation
[1] Univ Autonoma Madrid, Escuela Politecn Super, Dept Ingn Informat, C Francisco Tomas & Valiente 11, E-28049 Madrid, Spain
Keywords
Ensemble learning; Boosting; Uncertainty-based emphasis; Robust classification; CLASSIFICATION; NOISE; CLASSIFIERS; ALGORITHMS; ADABOOST
DOI
10.1016/j.patcog.2018.05.022
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Vote-boosting is a sequential ensemble learning method in which the individual classifiers are built on different weighted versions of the training data. To build a new classifier, the weight of each training instance is determined by the degree of disagreement among the current ensemble's predictions for that instance. For low class-label noise levels, especially when simple base learners are used, emphasis should be placed on instances for which the disagreement rate is high. When more flexible classifiers are used, and as the noise level increases, the emphasis on these uncertain instances should be reduced. In fact, at sufficiently high levels of class-label noise, the focus should be on instances on which the ensemble classifiers agree. The optimal type of emphasis can be determined automatically using cross-validation. An extensive empirical analysis using the beta distribution as the emphasis function shows that vote-boosting is an effective method for generating ensembles that are both accurate and robust. (C) 2018 Elsevier Ltd. All rights reserved.
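To make the weighting scheme concrete, below is a minimal sketch of the vote-boosting idea described in the abstract, not the authors' reference implementation. It assumes binary labels in {0, 1}, scikit-learn decision stumps as base learners, and a symmetric Beta(a, a) density as the emphasis function; the function names vote_boosting and vb_predict, the shape parameter a=2.0, and the clipping constant are illustrative choices. In the paper, the emphasis type (a > 1 stresses disagreement, a < 1 stresses agreement) is selected by cross-validation.

    import numpy as np
    from scipy.stats import beta
    from sklearn.tree import DecisionTreeClassifier

    def vote_boosting(X, y, n_estimators=101, a=2.0):
        """Sequentially fit stumps, re-weighting instances by ensemble disagreement."""
        n = len(y)
        weights = np.full(n, 1.0 / n)      # uniform emphasis for the first stump
        ensemble = []
        votes_for_1 = np.zeros(n)          # running count of votes for class 1
        for t in range(1, n_estimators + 1):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=weights)
            ensemble.append(stump)
            votes_for_1 += (stump.predict(X) == 1)
            # Per-instance vote fraction, clipped away from 0/1 so the Beta pdf stays finite.
            frac = np.clip(votes_for_1 / t, 1e-6, 1.0 - 1e-6)
            # Emphasis function: Beta(a, a) pdf of the vote fraction. With a > 1 the
            # weight peaks at 0.5 (maximal disagreement); with a < 1 it peaks near
            # 0 and 1 (instances the ensemble already agrees on).
            weights = beta.pdf(frac, a, a)
            weights /= weights.sum()       # normalize to a distribution
        return ensemble

    def vb_predict(ensemble, X):
        """Unweighted majority vote over the ensemble members."""
        frac = np.mean([m.predict(X) == 1 for m in ensemble], axis=0)
        return (frac >= 0.5).astype(int)

The emphasis could equally be realized by resampling the training set in proportion to weights rather than passing sample_weight; both are standard ways to implement instance weighting in boosting-type methods.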
Pages: 119-133
Number of pages: 15