Sparse algorithm for robust LSSVM in primal space

Cited by: 39
Authors
Chen, Li [1,2]
Zhou, Shuisheng [1]
Affiliations
[1] Xidian Univ, Sch Math & Stat, 266 Xinglong Sect, Xifeng Rd, Xian, Shaanxi, Peoples R China
[2] Zhongyuan Technol Univ, Dept Basic Sci, Coll Informat & Business, 41 Zhongyuan Middle Rd, Zhengzhou, Henan, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Primal LSSVM; Sparse solution; Re-weighted LSSVM; Low-rank approximation; Outliers; SUPPORT VECTOR MACHINES; NYSTROM METHOD; CLASSIFICATION; REPRESENTATION; CONVERGENCE; REGRESSION; NONCONVEX
DOI
10.1016/j.neucom.2017.10.011
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Because it has a closed-form solution, the least squares support vector machine (LSSVM) has been widely used for classification and regression problems, owing to its competitive performance compared with other types of SVMs. However, the LSSVM has two drawbacks: it is sensitive to outliers, and its solution lacks sparseness. The robust LSSVM (R-LSSVM) partially overcomes the first drawback via its nonconvex truncated loss function, but it does not address the second, because its current algorithms produce dense solutions that are inefficient for training large-scale problems. In this paper, we interpret the robustness of the R-LSSVM from a re-weighted viewpoint and develop a primal R-LSSVM using the representer theorem. The new model may have a sparse solution. We then design a convergent sparse R-LSSVM (SR-LSSVM) algorithm that achieves a sparse solution of the primal R-LSSVM after obtaining a low-rank approximation of the kernel matrix. The new algorithm not only overcomes both drawbacks of the LSSVM simultaneously but also has lower complexity than existing algorithms, so it is very efficient for training large-scale problems. Numerous experimental results demonstrate that the SR-LSSVM achieves performance better than or comparable to that of other related algorithms in less training time, especially on large-scale problems. (C) 2017 Elsevier B.V. All rights reserved.
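To make the pipeline described in the abstract concrete, the following is a minimal Python/NumPy sketch (not the authors' code) of the three ingredients it names: a Nystrom low-rank approximation of the kernel matrix, a primal least-squares solve in the resulting feature space, and a truncated-loss re-weighting loop that zeroes out the influence of outliers. All names (rbf_kernel, sr_lssvm_fit) and parameter choices are illustrative assumptions, not the paper's actual SR-LSSVM implementation.

import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d)

def sr_lssvm_fit(X, y, m=50, C=10.0, gamma=0.5, theta=2.0, n_iter=5, seed=0):
    # Hypothetical sketch of a sparse re-weighted LSSVM via Nystrom features.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)      # landmark (support) subset
    Knm = rbf_kernel(X, X[idx], gamma)              # n x m cross-kernel block
    Kmm = rbf_kernel(X[idx], X[idx], gamma)         # m x m landmark kernel
    # Nystrom feature map P with K ~ P P^T, where P = Knm Kmm^{-1/2}
    w, V = np.linalg.eigh(Kmm + 1e-8 * np.eye(m))
    P = Knm @ V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
    s = np.ones(n)                                  # per-sample weights
    beta = np.zeros(m)
    for _ in range(n_iter):
        Pw = P * s[:, None]                         # apply sample weights to rows
        # weighted primal ridge system: (P^T S P + I/C) beta = P^T S y
        beta = np.linalg.solve(P.T @ Pw + np.eye(m) / C, Pw.T @ y)
        e = y - P @ beta                            # LSSVM residuals e_i = y_i - f(x_i)
        # truncated-loss re-weighting: samples with large residuals get weight 0
        s = np.where(e**2 <= theta, 1.0, 0.0)
    return idx, beta

# usage sketch on toy two-class data
X = np.random.default_rng(1).normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.3 * np.random.default_rng(2).normal(size=200))
idx, beta = sr_lssvm_fit(X, y)

In this reading, sparsity comes from expressing the decision function only through the m landmarks indexed by idx, and the re-weighting loop is one concrete rendering of the truncated loss in which outliers receive zero weight; the paper's SR-LSSVM selects the low-rank approximation and the re-weighting rule more carefully and proves convergence of the resulting iteration.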
Pages: 2880-2891
Page count: 12