On the overestimation of random forest's out-of-bag error

Cited by: 153
Authors
Janitza, Silke [1 ]
Hornung, Roman [1 ]
Affiliations
[1] Univ Munich, Inst Med Informat Proc Biometry & Epidemiol, Munich, Germany
Source
PLOS ONE | 2018, Vol. 13, Issue 8
Keywords
PREDICTION; CLASSIFICATION; TUMOR; DISCOVERY; PATTERNS; IMPACT; CANCER;
DOI
10.1371/journal.pone.0201904
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
The ensemble method random forests has become a popular classification tool in bioinformatics and related fields. The out-of-bag error is an error estimation technique often used to evaluate the accuracy of a random forest and to select appropriate values for tuning parameters, such as the number of candidate predictors randomly drawn for a split, referred to as mtry. However, for binary classification problems with metric predictors it has been shown that the out-of-bag error can overestimate the true prediction error, depending on the choice of random forest parameters. Based on simulated and real data, this paper aims to identify settings in which this overestimation is likely. Moreover, it is questionable whether the out-of-bag error can be used in classification tasks for selecting tuning parameters like mtry, because the overestimation depends on the parameter mtry. The simulation-based and real-data studies with metric predictor variables performed in this paper show that the overestimation is largest in balanced settings and in settings with few observations, a large number of predictor variables, small correlations between predictors, and weak effects. The overestimation had hardly any impact on tuning parameter selection. However, although the prediction performance of random forests was not substantially affected when the out-of-bag error was used for tuning parameter selection in the present studies, one cannot be sure that this applies to all future data. For settings with metric predictor variables it is therefore strongly recommended to use stratified subsampling, with sampling fractions proportional to the class sizes, for both tuning parameter selection and error estimation in random forests; this yielded less biased estimates of the true prediction error.
In unbalanced settings, in which there is a strong interest in predicting observations from the smaller classes well, sampling the same number of observations from each class is a promising alternative.
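The recommendation above can be illustrated with a minimal sketch of an out-of-bag error estimate under stratified subsampling, where each tree's training subsample draws the same fraction from every class (i.e., fractions proportional to class sizes) without replacement. This is not the authors' code: the dataset, the 0.632 sampling fraction, the forest size, and the use of scikit-learn decision trees are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Illustrative imbalanced binary problem with metric predictors.
X, y = make_classification(n_samples=200, n_features=20,
                           weights=[0.7, 0.3], random_state=0)

n, n_trees, frac = len(y), 100, 0.632  # frac: assumed per-class sampling fraction
votes = np.zeros((n, 2))               # OOB class votes per observation

for t in range(n_trees):
    # Stratified subsample: draw frac * (class size) from each class,
    # without replacement, so class proportions are preserved.
    in_bag = np.concatenate([
        rng.choice(np.flatnonzero(y == c),
                   size=int(frac * np.sum(y == c)), replace=False)
        for c in (0, 1)
    ])
    oob = np.setdiff1d(np.arange(n), in_bag)
    tree = DecisionTreeClassifier(max_features="sqrt",
                                  random_state=t).fit(X[in_bag], y[in_bag])
    # Record each tree's vote only for its out-of-bag observations.
    votes[oob, tree.predict(X[oob])] += 1

seen = votes.sum(axis=1) > 0           # observations OOB at least once
oob_pred = votes.argmax(axis=1)
oob_error = np.mean(oob_pred[seen] != y[seen])
print(f"stratified-subsampling OOB error: {oob_error:.3f}")
```

For the balanced-sampling alternative mentioned for unbalanced settings, the per-class `size` would instead be set to a common number of observations for every class.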
Pages: 31