Validation of average error rate over classifiers

Cited by: 4
Authors
Bax, E [1 ]
Affiliation
[1] CALTECH, Dept Comp Sci, Pasadena, CA 91125 USA
Keywords
machine learning; Vapnik-Chervonenkis; validation;
DOI
10.1016/S0167-8655(97)00160-8
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We examine methods to estimate the average and variance of test error rates over a set of classifiers. We begin with the process of drawing a classifier at random for each example. Given validation data, the average test error rate can be estimated as if validating a single classifier. Given the test example inputs, the variance can be computed exactly. Next, we consider the process of drawing a classifier at random and using it on all examples. Once again, the expected test error rate can be validated as if validating a single classifier. However, the variance must be estimated by validating all classifiers, which yields loose or uncertain bounds. (C) 1998 Elsevier Science B.V. All rights reserved.
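The first process described in the abstract (drawing a classifier uniformly at random for each example) can be illustrated with a minimal sketch; the function names, toy threshold classifiers, and data below are invented for illustration and are not from the paper:

```python
import random

def avg_error_random_classifier(classifiers, examples, labels, seed=0):
    """Monte Carlo estimate of the error rate when a classifier is
    drawn uniformly at random for each example (hypothetical sketch)."""
    rng = random.Random(seed)
    errors = 0
    for x, y in zip(examples, labels):
        clf = rng.choice(classifiers)  # draw one classifier per example
        errors += (clf(x) != y)
    return errors / len(examples)

def exact_avg_error(classifiers, examples, labels):
    """Exact expectation of the same process: average, over examples,
    of each classifier set's mean error on that example."""
    total = 0.0
    for x, y in zip(examples, labels):
        total += sum(c(x) != y for c in classifiers) / len(classifiers)
    return total / len(examples)

# Toy usage: two threshold classifiers on scalar inputs.
clfs = [lambda x: int(x > 0.5), lambda x: int(x > 0.3)]
xs = [0.1, 0.4, 0.6, 0.9]
ys = [0, 0, 1, 1]
rate = avg_error_random_classifier(clfs, xs, ys)
exact = exact_avg_error(clfs, xs, ys)  # 0.125 for this toy data
```

The exact expectation shows why, in this process, the average error over the classifier set behaves like the error of a single classifier: it is a plain mean over examples, so single-classifier validation bounds apply.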
Pages: 127-132
Page count: 6
References
10 records
[1] [Anonymous], NEURAL COMPUT
[2] [Anonymous], ARTIFICIAL NEURAL NE
[3] BAX E, 1997, CALTECHCSTR9713
[4] BAX E, 1997, CALTECHCSTR9708
[5] BISHOP CM, 1995, NEURAL NETWORKS PATT, P364
[6] FELLER W, 1968, INTRO PROBABILITY TH, P128
[7] HOEFFDING W, 1963, AM STAT ASS J, P13
[8] JORDAN MI; JACOBS RA. Hierarchical mixtures of experts and the EM algorithm [J]. NEURAL COMPUTATION, 1994, 6(02): 181-214
[9] VAPNIK VN, 1982, ESTIMATION DEPENDENC, P31
[10] WOLPERT DH. Stacked generalization [J]. NEURAL NETWORKS, 1992, 5(02): 241-259