Robustness and Regularization of Support Vector Machines

Cited by: 0
Authors
Xu, Huan [1 ]
Caramanis, Constantine [2 ]
Mannor, Shie [1 ,3 ]
Affiliations
[1] McGill Univ, Dept Elect & Comp Engn, Montreal, PQ H3A 2A7, Canada
[2] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
[3] Technion, Dept Elect Engn, Haifa, Israel
Keywords
robustness; regularization; generalization; kernel; support vector machine; CONSISTENCY; STABILITY;
DOI
Not available
Chinese Library Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. We show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. In terms of algorithms, the equivalence suggests more general SVM-like classification algorithms that explicitly build in protection against noise while simultaneously controlling overfitting. On the analysis front, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized SVMs. We use this new robustness interpretation of SVMs to give a new proof of consistency of (kernelized) SVMs, thus establishing robustness as the reason regularized SVMs generalize well.
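The regularized SVM objective discussed in the abstract, with a norm (rather than squared-norm) penalty, is min over (w, b) of c·‖w‖ plus the average hinge loss. The following is a minimal illustrative sketch of that objective trained by subgradient descent on synthetic data; the data, step size, and iteration count are arbitrary choices for the example, not anything taken from the paper.

```python
import numpy as np

# Toy data: two Gaussian classes in 2D with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])

def svm_subgradient(X, y, c=0.1, lr=0.01, epochs=200):
    """Minimize c*||w|| + (1/n) * sum_i hinge(1 - y_i*(w.x_i + b))
    by subgradient descent (illustrative settings, not tuned)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1  # points with nonzero hinge loss
        # Subgradient of c*||w||: c * w/||w|| (zero vector handled by epsilon).
        grad_w = c * w / (np.linalg.norm(w) + 1e-12)
        grad_w -= (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = svm_subgradient(X, y)
preds = np.sign(X @ w + b)
print("training accuracy:", (preds == y).mean())
```

The norm penalty c·‖w‖ (as opposed to the more common c·‖w‖²) is the form under which the paper's equivalence holds: the regularized problem coincides with a min-max problem over norm-bounded perturbations of the training points.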
Pages: 1485-1510
Number of pages: 26