Learning the kernel in Mahalanobis one-class support vector machines

Cited: 0
Authors
Tsang, Ivor W. [1 ]
Kwok, James T. [1 ]
Li, Shutao [2 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Comp Sci, Clear Water Bay, Hong Kong, Hong Kong, Peoples R China
[2] Hunan Univ, Coll Elect & Informat Engn, Changsha, Hunan, Peoples R China
Source
2006 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORK PROCEEDINGS, VOLS 1-10 | 2006
Keywords
DOI
None
CLC classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we show that one-class SVMs can also utilize data covariance in a robust manner to improve performance. Furthermore, by constraining the desired kernel function to be a convex combination of base kernels, we show that the weighting coefficients can be learned via quadratically constrained quadratic programming (QCQP) or second-order cone programming (SOCP). Experiments on both toy and real-world data sets show promising results. This paper thus offers another demonstration of the synergy between convex optimization and kernel methods.
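The convex-combination constraint mentioned in the abstract guarantees that the learned kernel remains valid: any weighted sum of positive semidefinite (PSD) Gram matrices with nonnegative weights is itself PSD. A minimal NumPy sketch of this idea follows; the base-kernel bandwidths and the weights mu are fixed by hand here for illustration, whereas the paper learns mu via QCQP/SOCP (not reproduced in this sketch).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))  # toy data

def rbf_gram(X, gamma):
    """Gram matrix of an RBF base kernel (always PSD)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Base kernels with different bandwidths (illustrative choices).
base = [rbf_gram(X, g) for g in (0.1, 1.0, 10.0)]

# Convex weights: mu_k >= 0 and sum_k mu_k = 1.  The paper learns
# these coefficients by convex optimization; here they are fixed.
mu = np.array([0.2, 0.5, 0.3])
K = sum(m * B for m, B in zip(mu, base))

# A convex combination of PSD matrices stays PSD, so K is a valid
# kernel; its smallest eigenvalue is nonnegative up to round-off.
min_eig = np.linalg.eigvalsh(K).min()
```

The combined Gram matrix `K` could then be passed to any kernel machine that accepts a precomputed kernel, e.g. a one-class SVM.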
Pages: 1169 / +
Page count: 3