On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation

Cited by: 0
Authors
Cawley, Gavin C. [1 ]
Talbot, Nicola L. C. [1 ]
Affiliations
[1] Univ E Anglia, Sch Comp Sci, Norwich NR4 7TJ, Norfolk, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
model selection; performance evaluation; bias-variance trade-off; selection bias; over-fitting; SUPPORT VECTOR MACHINE; LEAVE-ONE-OUT CROSS-VALIDATION; CLASSIFICATION; REGULARIZATION; COEFFICIENTS; PARAMETERS; STABILITY; VARIANCE; NETWORKS; BOUNDS;
DOI
Not available
Chinese Library Classification (CLC) number
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Model selection strategies for machine learning algorithms typically involve the numerical optimisation of an appropriate model selection criterion, often based on an estimator of generalisation performance, such as k-fold cross-validation. The error of such an estimator can be broken down into bias and variance components. While unbiasedness is often cited as a beneficial quality of a model selection criterion, we demonstrate that a low variance is at least as important, as a non-negligible variance introduces the potential for over-fitting in model selection as well as in training the model. While this observation is in hindsight perhaps rather obvious, the degradation in performance due to over-fitting the model selection criterion can be surprisingly large, an observation that appears to have received little attention in the machine learning literature to date. In this paper, we show that the effects of this form of over-fitting are often of comparable magnitude to differences in performance between learning algorithms, and thus cannot be ignored in empirical evaluation. Furthermore, we show that some common performance evaluation practices are susceptible to a form of selection bias as a result of this form of over-fitting and hence are unreliable. We discuss methods to avoid over-fitting in model selection and subsequent selection bias in performance evaluation, which we hope will be incorporated into best practice. While this study concentrates on cross-validation based model selection, the findings are quite general and apply to any model selection practice involving the optimisation of a model selection criterion evaluated over a finite sample of data, including maximisation of the Bayesian evidence and optimisation of performance bounds.
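A minimal sketch of the kind of protocol the abstract argues for, assuming scikit-learn and a synthetic dataset (both illustrative assumptions, not the authors' experimental setup). The biased protocol reports the same k-fold cross-validation score that was optimised during model selection; the nested protocol wraps the entire selection procedure in an outer cross-validation loop, so performance is always measured on data that model selection never saw.

```python
# Illustrative sketch only: nested cross-validation as a guard against
# selection bias. Dataset, model, and parameter grid are assumptions
# made for this example, not taken from the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

# Inner loop: model selection by optimising a k-fold CV criterion.
inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)
search = GridSearchCV(SVC(), param_grid, cv=inner_cv)

# Biased estimate: report the best inner-CV score directly. The same
# finite sample both drives the selection and scores it, so the
# estimate inherits the optimism of over-fitting the criterion.
search.fit(X, y)
print("best inner-CV accuracy (optimistic):", search.best_score_)

# Nested estimate: repeat the whole selection procedure inside an
# outer CV loop, so each test fold is unseen by model selection.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)
nested = cross_val_score(search, X, y, cv=outer_cv)
print("nested-CV accuracy:", nested.mean())
```

On small samples the gap between the two printed scores can be substantial, which is the paper's point: the variance of the selection criterion lets model selection over-fit, and evaluating performance on the optimised criterion then reports that optimism as if it were generalisation.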
Pages: 2079-2107
Number of pages: 29