A new boosting design of Support Vector Machine classifiers

Cited by: 15
Authors
Mayhua-Lopez, Efrain [1]
Gomez-Verdejo, Vanessa [2]
Figueiras-Vidal, Anibal R. [2]
Affiliations
[1] Univ Catolica San Pablo, Arequipa, Peru
[2] Univ Carlos III Madrid, Dept Signal Theory & Commun, Madrid 28911, Spain
Keywords
Real AdaBoost; Subsampling; Support Vector Machines; Linear programming; Ensemble classifiers; Neural networks; Classification; AdaBoost; Algorithms; Ensembles; Model
DOI
10.1016/j.inffus.2014.10.005
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Boosting algorithms attend to the particular structure of the training data by iteratively emphasizing training samples according to how difficult they are to classify correctly. If standard kernel Support Vector Machines (SVMs) are used as base learners to construct a Real AdaBoost ensemble, the resulting ensemble can easily be compacted into a monolithic architecture by combining the weights that correspond to the same kernels when they appear in different learners, so this potential advantage comes without increasing the computational effort at operation time. In this way, the performance advantage that boosting provides can be obtained for monolithic SVMs, i.e., without paying the classification-time cost of evaluating many learners. However, SVMs are both stable and strong, and using them for boosting requires destabilizing and weakening them; previous attempts in this direction have shown only moderate success. In this paper, we propose combining a new, appropriately designed subsampling process with an SVM algorithm that permits sparsity control, to overcome the difficulties of boosting SVMs and obtain improved designs. Experimental results support the effectiveness of the approach, not only in performance but also in the compactness of the resulting classifiers, and show that combining both design ideas is needed to arrive at these advantageous designs. (C) 2014 Elsevier B.V. All rights reserved.
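The abstract describes two coupled ideas: (i) weakening the stable, strong SVM base learner by training each boosting round on a subsample drawn from the current emphasis distribution, and (ii) compacting the resulting ensemble into a single "monolithic" kernel machine by summing, sample-wise, the dual weights attached to the same kernel across learners. As a rough illustration only, the Python sketch below implements both ideas with a simplified discrete-AdaBoost-style loop and scikit-learn's SVC as a stand-in for the paper's Real AdaBoost and sparsity-controlled linear-programming SVM; the function name, the RBF kernel choice, and all parameter values are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def boost_and_compact_svms(X, y, T=10, gamma=0.5, sub_frac=0.5, seed=0):
    # Hypothetical sketch (not the paper's method): boost SVMs weakened by
    # subsampling, then compact the ensemble into one kernel expansion.
    # Labels y must take values in {-1, +1}.
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.full(n, 1.0 / n)   # emphasis (sample weight) distribution
    coef = np.zeros(n)        # compacted dual coefficients, one per sample
    bias = 0.0
    for _ in range(T):
        # Subsampling according to D destabilizes/weakens the otherwise
        # stable and strong SVM base learner.
        idx = rng.choice(n, size=max(2, int(sub_frac * n)), replace=True, p=D)
        if len(np.unique(y[idx])) < 2:
            continue          # degenerate one-class subsample; try next round
        svm = SVC(kernel="rbf", gamma=gamma).fit(X[idx], y[idx])
        pred = svm.predict(X)
        err = np.clip(D @ (pred != y), 1e-10, 1.0 - 1e-10)
        if err >= 0.5:
            continue          # discard learners no better than chance
        c = 0.5 * np.log((1.0 - err) / err)   # learner combination weight
        # Compaction: every learner uses the same kernel on subsets of the
        # same data, so weighted dual coefficients can be summed per sample.
        np.add.at(coef, idx[svm.support_], c * svm.dual_coef_.ravel())
        bias += c * svm.intercept_[0]
        D *= np.exp(-c * y * pred)            # re-emphasize the errors
        D /= D.sum()
    # Monolithic decision function: f(x) = sum_i coef_i K(x_i, x) + bias
    return lambda Xq: np.sign(rbf_kernel(Xq, X, gamma=gamma) @ coef + bias)
```

A call such as f = boost_and_compact_svms(X_train, y_train) returns a single decision function whose evaluation cost matches that of one monolithic SVM, since only a single kernel expansion over the training set is computed regardless of how many boosting rounds T were run.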
Pages: 63-71
Page count: 9