A comparative study on large scale kernelized support vector machines

Cited by: 0
Authors
Daniel Horn
Aydın Demircioğlu
Bernd Bischl
Tobias Glasmachers
Claus Weihs
Affiliations
[1] Technische Universität Dortmund, Fakultät Statistik
[2] Ruhr-Universität Bochum, Department of Statistics
[3] LMU München
Source
Advances in Data Analysis and Classification | 2018, Vol. 12
Keywords
Support vector machine; Multi-objective optimization; Supervised learning; Machine learning; Large scale; Nonlinear SVM; Parameter tuning; 62-07 Data analysis
DOI
Not available
Abstract
Kernelized support vector machines (SVMs) are among the most widely used classification methods. However, in contrast to linear SVMs, the computation time required to train such a machine becomes a bottleneck on large data sets. To mitigate this shortcoming of kernel SVMs, many approximate training algorithms have been developed. While most of these methods claim to be much faster than the state-of-the-art solver LIBSVM, a thorough comparative study has been missing. We aim to fill this gap. We choose several well-known approximate SVM solvers and compare their performance on a number of large benchmark data sets. Our focus is to analyze the trade-off between prediction error and runtime for different learning and accuracy parameter settings. This includes simple subsampling of the data, the poor man's approach to handling large scale problems. We employ model-based multi-objective optimization, which allows us to tune the parameters of both the learning machine and the solver over the full range of accuracy/runtime trade-offs. We analyze the differences between solvers by studying and comparing the Pareto fronts formed by the two objectives, classification error and training time. Unsurprisingly, given more runtime most solvers find more accurate solutions. It turns out that LIBSVM with subsampling of the data is a strong baseline. Some solvers systematically outperform others, which allows us to give concrete recommendations on when to use which solver.
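To make the evaluation protocol concrete, here is a minimal sketch, not the authors' actual pipeline, of the two ingredients the abstract describes: training LIBSVM (via scikit-learn's SVC wrapper) on random subsamples while recording a (training time, test error) pair per configuration, and then extracting the Pareto front over the two objectives. The synthetic data set, subsample sizes, and (C, gamma) grid are illustrative assumptions; the study tunes these parameters with model-based multi-objective optimization rather than a fixed grid.

```python
# Sketch: accuracy/runtime trade-off of LIBSVM with subsampling, plus the
# Pareto front of the resulting (training time, test error) pairs.
# Assumptions: scikit-learn's SVC (which wraps LIBSVM), synthetic data,
# and a fixed illustrative grid instead of model-based multi-objective search.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rng = np.random.default_rng(0)
results = []  # one (training time, test error) pair per configuration
for n_sub in (500, 2000, 8000, len(X_tr)):         # subsample sizes (assumed)
    idx = rng.choice(len(X_tr), size=n_sub, replace=False)
    for C, gamma in ((1.0, 0.1), (10.0, 0.01)):    # illustrative (C, gamma) grid
        clf = SVC(C=C, gamma=gamma, kernel="rbf")  # SVC calls into LIBSVM
        t0 = time.perf_counter()
        clf.fit(X_tr[idx], y_tr[idx])
        runtime = time.perf_counter() - t0
        error = 1.0 - clf.score(X_te, y_te)
        results.append((runtime, error))

def pareto_front(points):
    """Keep the non-dominated points: a point is dominated if another point
    is no worse in both objectives (runtime, error) and differs from it."""
    def dominated(p):
        return any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
    return sorted(p for p in points if not dominated(p))

for runtime, error in pareto_front(results):
    print(f"runtime = {runtime:7.3f} s   error = {error:.4f}")
```

In the study itself, the configurations would not come from a fixed grid: the model-based multi-objective optimizer proposes new parameter settings iteratively, so the Pareto front over error and training time is explored directly.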
Pages: 867-883
Number of pages: 16
Related papers
50 records in total
[21] Teles, Germanno; Rodrigues, Joel J. P. C.; Rabelo, Ricardo A. L.; Kozlov, Sergei A. Comparative study of support vector machines and random forests machine learning algorithms on credit operation. Software: Practice and Experience, 2021, 51(12): 2492-2500.
[22] Nakayama, Hirotaka; Yun, Yeboon; Uno, Yuki. Parameter Tuning of Large Scale Support Vector Machines using Ensemble Learning with Applications to Imbalanced Data Sets. Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2012: 2815-2820.
[23] Wu, Yu-Chieh; Lee, Yue-Shi; Yang, Jie-Chi; Yen, Show-Jane. A Sparse L2-Regularized Support Vector Machines for Large-Scale Natural Language Learning. Information Retrieval Technology, 2010, 6458: 340+.
[24] Arana-Daniel, Nancy; Gallegos, Alberto A.; Lopez-Franco, Carlos; Alanis, Alma Y.; Morales, Jacob; Lopez-Franco, Adriana. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures. Evolutionary Bioinformatics, 2016, 12: 285-302.
[25] Kadyrova, N. O.; Pavlova, L. V. Comparative efficiency of algorithms based on support vector machines for binary classification. Biophysics, 2015, 60(1): 13-24.
[26] Jabardi, Mohammed. Support Vector Machines: Theory, Algorithms, and Applications. Infocommunications Journal, 2025, 17(1): 66-75.
[27] Schittkowski, K. Optimal parameter selection in support vector machines. Journal of Industrial and Management Optimization, 2005, 1(4): 465-476.
[28] Ferris, Michael C.; Munson, Todd S. Semismooth support vector machines. Mathematical Programming, 2004, 101: 185-204.
[29] Lee, Gyemin; Scott, Clayton. Nested Support Vector Machines. IEEE Transactions on Signal Processing, 2010, 58(3): 1648-1660.
[30] Navia-Vazquez, A.; Gutierrez-Gonzalez, D.; Parrado-Hernandez, E.; Navarro-Abellan, J. J. Distributed support vector machines. IEEE Transactions on Neural Networks, 2006, 17(4): 1091-1097.