Training soft margin support vector machines by simulated annealing: A dual approach

Cited by: 20
Authors
Dantas Dias, Madson L. [1 ]
Rocha Neto, Ajalmar R. [1 ]
Affiliations
[1] Fed Inst Ceara IFCE, Dept Teleinformat, Av Treze Maio 2081, BR-60040215 Fortaleza, Ceara, Brazil
Keywords
Support vector machines; Simulated annealing; Learning methods;
DOI
10.1016/j.eswa.2017.06.016
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A theoretical advantage of support vector machines (SVM) is their combination of empirical and structural risk minimization, which balances the complexity of the model against its success at fitting the training data. Metaheuristics have mostly been used with support vector machines either to tune hyperparameters or to perform feature selection. In this paper, we present a new approach to obtaining sparse support vector machines based on simulated annealing (SA), named SATE. In our proposal, SA is used to solve the quadratic optimization problem that emerges from support vector machines, rather than to tune the hyperparameters. We compared our proposal with sequential minimal optimization (SMO), the kernel adatron (KA), a standard QP solver, as well as with recent particle swarm optimization (PSO)- and genetic algorithm (GA)-based versions. Generally speaking, one can infer that SATE is equivalent to SMO in terms of accuracy and mean number of support vectors, and is sparser than KA, QP, LPSO, and GA. SATE also achieves higher accuracies than the GA- and PSO-based versions. Moreover, SATE successfully embeds the SVM constraints and provides a competitive classifier while maintaining simplicity and high sparseness in the solution. (C) 2017 Elsevier Ltd. All rights reserved.
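The abstract describes applying SA directly to the SVM dual QP, with the box constraint 0 ≤ α_i ≤ C and the equality constraint Σ α_i y_i = 0 embedded in the search. Below is a minimal illustrative sketch of that general idea, not a reconstruction of the authors' SATE algorithm: the toy data, the pairwise perturbation move (borrowed from SMO-style updates to preserve the equality constraint), the step size, and the geometric cooling schedule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable 2-D data (assumed for illustration only)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
C = 1.0

K = X @ X.T                 # linear kernel Gram matrix
YKY = np.outer(y, y) * K

def dual_objective(a):
    # Soft-margin SVM dual: maximize sum(a) - 0.5 * a^T (yy^T * K) a
    return a.sum() - 0.5 * a @ YKY @ a

n = len(y)
alpha = np.zeros(n)         # feasible start: box and sum(alpha*y)=0 both hold
best = alpha.copy()
T = 1.0
for _ in range(20000):
    # Perturb a pair (i, j) so that sum(alpha * y) stays exactly zero,
    # mirroring an SMO-style pair update but accepted stochastically.
    i, j = rng.choice(n, size=2, replace=False)
    d = rng.normal(0, 0.1)
    cand = alpha.copy()
    cand[i] += d / y[i]
    cand[j] -= d / y[j]
    if not (0 <= cand[i] <= C and 0 <= cand[j] <= C):
        continue            # reject moves that leave the box [0, C]
    delta = dual_objective(cand) - dual_objective(alpha)
    # Metropolis acceptance: always take improvements, sometimes take worse moves
    if delta > 0 or rng.random() < np.exp(delta / T):
        alpha = cand
        if dual_objective(alpha) > dual_objective(best):
            best = alpha.copy()
    T *= 0.9995             # geometric cooling schedule (assumed)

support = np.flatnonzero(best > 1e-6)   # indices of support vectors
w = (best * y) @ X                      # primal weights recovered from the dual
```

Because each move perturbs a pair with opposite signed adjustments, every accepted state remains feasible; sparseness then comes from how many α_i stay at the zero bound when the annealing finishes.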
Pages: 157-169
Page count: 13