Bayesian approach to feature selection and parameter tuning for support vector machine classifiers

Cited by: 53
Authors
Gold, C [1]
Holub, A
Sollich, P
Affiliations
[1] CALTECH, Pasadena, CA 91125 USA
[2] Kings Coll London, Dept Math, London WC2R 2LS, England
Keywords
DOI: 10.1016/j.neunet.2005.06.044
CLC Number: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
A Bayesian view of SVM classifiers allows the definition of a quantity analogous to the evidence in probabilistic models. By maximizing this quantity one can systematically tune hyperparameters and, via automatic relevance determination (ARD), select relevant input features. Evidence gradients are expressed as averages over the associated posterior and can be approximated using Hybrid Monte Carlo (HMC) sampling. We describe how a Nyström approximation of the Gram matrix can be used to speed up sampling significantly while leaving classification accuracy almost unchanged. In experiments on classification problems with many irrelevant features, this approach to ARD gives a significant improvement in classification performance over more traditional, non-ARD SVM systems. The final tuned hyperparameter values provide a useful criterion for pruning irrelevant features, and we define a measure of relevance with which to determine systematically how many features should be removed. This use of ARD for hard feature selection can improve the classification accuracy of non-ARD SVMs. In the majority of cases, however, we find that on data sets constructed by human domain experts the performance of non-ARD SVMs is largely insensitive to the presence of some less relevant features. Eliminating such features via ARD then does not improve classification accuracy, but leads to impressive reductions, by up to 75%, in the number of features required. (c) 2005 Elsevier Ltd. All rights reserved.
Pages: 693-701 (9 pages)
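The abstract's two key kernel-level ingredients — an ARD kernel with one tunable weight per input feature, and a Nyström low-rank approximation of the Gram matrix — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function names, the squared-exponential ARD form, and the uniform random choice of landmark points are assumptions made for the example.

```python
import numpy as np

def ard_rbf_kernel(X1, X2, weights):
    """ARD RBF kernel: k(x, x') = exp(-sum_d w_d (x_d - x'_d)^2).
    A separate weight per feature; features with weight near zero
    are effectively ignored, which is how ARD prunes inputs."""
    diff = X1[:, None, :] - X2[None, :, :]          # (n1, n2, d)
    return np.exp(-np.sum(weights * diff**2, axis=2))

def nystrom_gram(X, weights, m, rng):
    """Nystrom approximation of the full n x n Gram matrix:
    K ~= K_nm @ pinv(K_mm) @ K_nm.T, built from m landmark points,
    so only an n x m block need ever be evaluated."""
    idx = rng.choice(len(X), size=m, replace=False)  # landmark subset
    K_nm = ard_rbf_kernel(X, X[idx], weights)        # (n, m)
    K_mm = K_nm[idx]                                 # (m, m) landmark block
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w = np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # ARD: last 3 features irrelevant
K_full = ard_rbf_kernel(X, X, w)
K_approx = nystrom_gram(X, w, m=50, rng=rng)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

In the paper's setting the low-rank factorization replaces the full Gram matrix inside the HMC sampler, which is where the reported speed-up comes from; here `err` simply measures how closely 50 landmarks reproduce a 200-point Gram matrix.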