Sparse kernel partial least squares regression

Cited by: 15
Authors
Momma, M [1]
Bennett, KP [2]
Affiliations
[1] Rensselaer Polytech Inst, Dept Decis Sci & Engn Syst, Troy, NY 12180 USA
[2] Rensselaer Polytech Inst, Dept Math Sci, Troy, NY 12180 USA
Source
LEARNING THEORY AND KERNEL MACHINES | 2003 / Vol. 2777
DOI
10.1007/978-3-540-45167-9_17
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Partial Least Squares Regression (PLS) and its kernel version (KPLS) have become competitive regression approaches. KPLS performs as well as or better than support vector regression (SVR) on moderately sized problems, with the advantages of simple implementation, lower training cost, and easier parameter tuning. Unlike SVR, however, KPLS requires manipulation of the full kernel matrix, and the resulting regression function depends on all of the training data. In this paper we rigorously derive a sparse KPLS algorithm. The underlying KPLS algorithm is modified to maintain sparsity at every step. The resulting nu-KPLS algorithm models centering and bias explicitly rather than relying on kernel centering, and an epsilon-insensitive loss function is used to produce sparse solutions in the dual space. The final nu-KPLS regression function requires only a relatively small set of support vectors.
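For reference, below is a minimal sketch of the dense (non-sparse) KPLS baseline that the paper sparsifies, following the standard NIPALS-style kernel PLS of Rosipal and Trejo (cited in this record). The function names (rbf_kernel, kpls_fit), the RBF kernel choice, and the omission of kernel centering are illustrative assumptions; this is not the paper's nu-KPLS method, which enforces sparsity with an epsilon-insensitive loss and models centering and bias explicitly.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian RBF kernel matrix between rows of X and rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kpls_fit(K, y, n_components=3):
    # Dense kernel PLS for a single response, NIPALS style.
    # Kernel centering is omitted for brevity; the paper's nu-KPLS
    # models centering and bias explicitly instead of centering K.
    n = K.shape[0]
    Kd = K.astype(float).copy()      # deflated kernel matrix
    yd = y.astype(float).copy()      # deflated response
    T_cols, U_cols = [], []
    for _ in range(n_components):
        u = yd / np.linalg.norm(yd)  # single output: weight vector in one step
        t = Kd @ u                   # latent score in feature space
        t /= np.linalg.norm(t)
        T_cols.append(t)
        U_cols.append(u)
        # Deflate K and y against the extracted score direction t.
        P = np.eye(n) - np.outer(t, t)
        Kd = P @ Kd @ P
        yd = yd - t * (t @ yd)
    T = np.stack(T_cols, axis=1)
    U = np.stack(U_cols, axis=1)
    # Dual regression coefficients: predictions are k(x) @ alpha, where k(x)
    # holds kernel values against ALL training points -- the dense dependence
    # on the full training set that the sparse algorithm removes.
    alpha = U @ np.linalg.solve(T.T @ K @ U, T.T @ y)
    return T, alpha

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(50)
K = rbf_kernel(X, X)
T, alpha = kpls_fit(K, y, n_components=3)
y_hat = K @ alpha                    # in-sample predictions

In the paper's sparse variant, the epsilon-insensitive loss drives most entries of each dual weight vector to zero, so prediction requires evaluating the kernel only against the surviving support vectors rather than the whole training set.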
Pages: 216-230
Page count: 15