A Convergent Hybrid Decomposition Algorithm Model for SVM Training

Cited: 17
Authors
Lucidi, Stefano [1 ]
Palagi, Laura [1 ]
Risi, Arnaldo [2 ]
Sciandrone, Marco [3 ]
Affiliations
[1] Univ Roma La Sapienza, Dipartimento Informat & Sistemist Antonio Ruberti, I-00185 Rome, Italy
[2] CNR, Ist Anal Sistemi & Informat Antonio Ruberti, I-00185 Rome, Italy
[3] Univ Florence, Dipartimento Sistemi & Informat, I-50139 Florence, Italy
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2009 / Vol. 20 / No. 6
Keywords
Caching technique; convergent hybrid schemes; large scale quadratic optimization; maximum violating pair; support vector machines (SVMs); working set selection;
DOI
10.1109/TNN.2009.2020908
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Training a support vector machine (SVM) requires solving a linearly constrained convex quadratic problem. In real applications, the number of training data may be very large, so the Hessian matrix cannot be stored. To address this issue, a common strategy is to use decomposition algorithms, which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by a caching technique that allocates memory to store the columns of the Hessian matrix corresponding to recently updated variables. The convergence properties of a decomposition method can be guaranteed by a suitable selection of the working set, but this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model that combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information provided by the caching technique. We report the results of computational experiments performed with simple implementations of this algorithm; the numerical results demonstrate the potential of the approach.
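The components the abstract refers to, decomposition over a two-variable working set, maximum-violating-pair (MVP) selection, and a cache of Hessian/kernel columns, can be illustrated with a minimal sketch. This is not the authors' hybrid algorithm: it is a plain SMO-style loop with a linear kernel, Platt's analytic two-variable subproblem solution, and a simple LRU column cache; all function and variable names here are illustrative assumptions.

```python
import numpy as np
from collections import OrderedDict

class ColumnCache:
    """Tiny LRU cache for kernel-matrix columns (the 'caching technique')."""
    def __init__(self, compute_col, capacity=64):
        self.compute_col = compute_col
        self.capacity = capacity
        self.cols = OrderedDict()

    def get(self, i):
        if i in self.cols:
            self.cols.move_to_end(i)           # mark column as recently used
        else:
            if len(self.cols) >= self.capacity:
                self.cols.popitem(last=False)  # evict least recently used
            self.cols[i] = self.compute_col(i)
        return self.cols[i]

def smo_mvp_train(X, y, C=1.0, tol=1e-3, max_iter=100_000, cache_size=64):
    """SMO-style decomposition for the SVM dual (linear kernel):
    min 0.5 a^T Q a - e^T a  s.t.  0 <= a <= C,  y^T a = 0,
    with maximum-violating-pair working-set selection."""
    n = len(y)
    alpha = np.zeros(n)
    G = -np.ones(n)                            # gradient Q a - e at a = 0
    cache = ColumnCache(lambda k: X @ X[k], cache_size)
    for _ in range(max_iter):
        yG = y * G
        up = ((alpha < C) & (y > 0)) | ((alpha > 0) & (y < 0))
        low = ((alpha < C) & (y < 0)) | ((alpha > 0) & (y > 0))
        i = np.where(up)[0][np.argmax(-yG[up])]   # most violating "up" index
        j = np.where(low)[0][np.argmin(-yG[low])] # most violating "low" index
        if (-yG[i]) - (-yG[j]) <= tol:            # KKT violation small: stop
            break
        Ki, Kj = cache.get(i), cache.get(j)       # kernel columns, via cache
        # Analytic solution of the two-variable subproblem (Platt's SMO).
        if y[i] != y[j]:
            L = max(0.0, alpha[j] - alpha[i])
            H = min(C, C + alpha[j] - alpha[i])
        else:
            L = max(0.0, alpha[i] + alpha[j] - C)
            H = min(C, alpha[i] + alpha[j])
        eta = max(Ki[i] + Kj[j] - 2.0 * Ki[j], 1e-12)
        aj_new = np.clip(alpha[j] + y[j] * (yG[i] - yG[j]) / eta, L, H)
        ai_new = alpha[i] + y[i] * y[j] * (alpha[j] - aj_new)
        dai, daj = ai_new - alpha[i], aj_new - alpha[j]
        # Gradient update needs only the two cached columns.
        G += y * (y[i] * dai * Ki + y[j] * daj * Kj)
        alpha[i], alpha[j] = ai_new, aj_new
    # Bias from free support vectors (fallback: all support vectors).
    yG = y * G
    free = (alpha > 1e-8) & (alpha < C - 1e-8)
    sv = alpha > 1e-8
    b = -np.mean(yG[free]) if free.any() else -np.mean(yG[sv])
    return alpha, b
```

The cache matters because each iteration's gradient update touches only the two selected columns of the Hessian; if working-set selection keeps revisiting recently used variables, those columns are served from memory instead of being recomputed, which is exactly the tension with convergence-enforcing selection rules that the paper addresses.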
Pages: 1055-1060
Page count: 6
Related Papers
50 records
  • [1] Fast SVM training algorithm with decomposition on very large data sets
    Dong, JX
    Krzyzak, A
    Suen, CY
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2005, 27 (04) : 603 - 618
  • [2] Hybrid optimization of feature selection and SVM training model
    Tran, Quang-Anh
    Zhang, Qianli
    Li, Xing
    Qinghua Daxue Xuebao/Journal of Tsinghua University, 2004, 44 (01): : 9 - 12
  • [3] An Alternating Genetic Algorithm for Selecting SVM Model and Training Set
    Kawulok, Michal
    Nalepa, Jakub
    Dudzik, Wojciech
    PATTERN RECOGNITION (MCPR 2017), 2017, 10267 : 94 - 104
  • [4] A fast SVM training algorithm
    Dong, JX
    Suen, CY
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2003, 17 (03) : 367 - 384
  • [5] A fast SVM training algorithm
    Dong, JX
    Krzyzak, A
    Suen, CY
PATTERN RECOGNITION WITH SUPPORT VECTOR MACHINES, PROCEEDINGS, 2002, 2388 : 53 - 67
  • [6] A hybrid method for speeding SVM training
    Zeng, Zhi-Qiang
    Gao, Ji
    Guo, Hang
    NEXT GENERATION INFORMATION TECHNOLOGIES AND SYSTEMS, PROCEEDINGS, 2006, 4032 : 312 - 320
  • [7] A new fast training algorithm for SVM
    He, Zhi-Jie
    Jin, Lian-Wen
    PROCEEDINGS OF 2008 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2008, : 3451 - 3456
  • [8] A convergent decomposition algorithm for support vector machines
    Lucidi, S.
    Palagi, L.
    Risi, A.
    Sciandrone, M.
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2007, 38 (02) : 217 - 234
  • [10] A new iterative algorithm training SVM
    Zhou, Shuisheng
    Liu, Hongwei
    Ye, Feng
    Zhou, Lihua
    OPTIMIZATION METHODS & SOFTWARE, 2009, 24 (06): : 913 - 932