The reliability of estimated confidence intervals for classification error rates when only a single sample is available

Times Cited: 6
Authors
Hanczar, Blaise [1 ]
Dougherty, Edward R. [2 ,3 ]
Affiliations
[1] Univ Paris 05, LIPADE, F-75006 Paris, France
[2] Texas A&M Univ, Dept Elect & Comp Engn, College Stn, TX USA
[3] Translat Genom Res Inst, Computat Biol Div, Phoenix, AZ USA
Keywords
Supervised learning; Error estimation; High dimension; Small sample setting; Confidence interval; CROSS-VALIDATION; MICROARRAY CLASSIFICATION; PREDICTION ERROR; PERFORMANCE; CLASSIFIERS; INFERENCE; CANCER;
DOI
10.1016/j.patcog.2012.09.019
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Error estimation accuracy is the salient issue regarding the validity of a classifier model. When samples are small, training-data-based error estimates tend to be inaccurate, and quantifying error estimation accuracy is difficult. Numerous methods have been proposed for estimating confidence intervals for the true error from the estimated error. This paper surveys the proposed methods and quantifies their performance. Monte Carlo methods are used to obtain accurate estimates of the true confidence intervals, which are then compared to the intervals estimated from samples. We consider different error estimators and several proposed confidence-bound estimators, employing both synthetic and real genomic data. Our simulations show that the majority of confidence-interval methods perform poorly because of the difference in shape between the true and estimated intervals. According to our results, the best estimation strategy is 10-times repeated 10-fold cross-validation with a confidence interval based on the standard deviation. (C) 2012 Elsevier Ltd. All rights reserved.
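The recommended strategy can be illustrated with a minimal sketch (Python with scikit-learn; not the authors' code): estimate the classification error with 10 repetitions of 10-fold cross-validation and build a confidence interval from the standard deviation of the per-fold error rates. The classifier, the synthetic data, and the exact normal-approximation interval formula used below are illustrative assumptions, not the paper's construction.

import numpy as np
from scipy.stats import norm
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Small-sample, moderately high-dimensional synthetic data as a stand-in
# for the genomic setting discussed in the paper.
X, y = make_classification(n_samples=100, n_features=50, n_informative=10,
                           random_state=0)

# Shrinkage LDA is chosen here only because it stays well behaved when the
# feature count is large relative to the sample size (an assumption, not
# the paper's classifier choice).
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)

# Per-fold error rates over the 10 x 10 = 100 folds.
fold_errors = 1.0 - cross_val_score(clf, X, y, cv=cv, scoring="accuracy")

# Assumed interval form: mean error +/- z * sd / sqrt(#folds); consult the
# paper for the precise standard-deviation-based construction it evaluates.
alpha = 0.05
err_hat = fold_errors.mean()
half_width = (norm.ppf(1.0 - alpha / 2.0)
              * fold_errors.std(ddof=1) / np.sqrt(fold_errors.size))
lower, upper = max(0.0, err_hat - half_width), min(1.0, err_hat + half_width)
print(f"estimated error: {err_hat:.3f}")
print(f"{100 * (1 - alpha):.0f}% confidence interval: [{lower:.3f}, {upper:.3f}]")

Intervals of this kind are exactly what the paper evaluates for coverage; the sketch shows the mechanics of the construction, not a guarantee of its reliability.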
Pages: 1067-1077
Number of Pages: 11