A computationally fast variable importance test for random forests for high-dimensional data

Cited: 136
Authors
Janitza, Silke [1 ]
Celik, Ender [1 ]
Boulesteix, Anne-Laure [1 ]
Affiliations
[1] Univ Munich, Dept Med Informat Biometry & Epidemiol, Marchioninistr 15, D-81377 Munich, Germany
Keywords
Gene selection; Feature selection; Random forests; Variable importance; Variable selection; Variable importance test; Regression trees; Classification; Prediction; Tumor; Cancer
DOI
10.1007/s11634-016-0276-4
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
Random forests are a commonly used tool for classification and for ranking candidate predictors based on so-called variable importance measures. These measures assign each variable a score reflecting its importance. A drawback of variable importance measures is that there is no natural cutoff for discriminating between important and unimportant variables. Several approaches, for example approaches based on hypothesis testing, have been developed to address this problem. The existing testing approaches require the repeated computation of random forests. While such approaches may be computationally tractable in low-dimensional settings, in high-dimensional settings, which typically include thousands of candidate predictors, the computing time is enormous. In this article a computationally fast heuristic variable importance test is proposed that is appropriate for high-dimensional data in which many variables carry no information. The testing approach is based on a modified version of the permutation variable importance measure that is inspired by cross-validation procedures. The new approach is compared to the approach of Altmann and colleagues in simulation studies based on real data from high-dimensional binary classification settings. In these studies the new approach controls the type I error and achieves at least comparable power at substantially lower computation time. It may thus serve as a computationally fast alternative to existing procedures for high-dimensional data settings in which many variables carry no information. The new approach is implemented in the R package vita.
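The general idea behind the abstract's heuristic test can be sketched in a few lines: compute permutation variable importance on held-out data (echoing the cross-validation flavour mentioned above), then build an empirical null distribution from the non-positive importances, which in a setting with many uninformative variables approximate the behaviour of null variables. The sketch below is a minimal Python illustration using scikit-learn, not the authors' R implementation in the vita package; the toy data, split, and mirroring heuristic are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data (assumption for illustration): 2 informative predictors
# among 48 pure-noise variables, binary outcome.
rng = np.random.default_rng(0)
n, p = 300, 50
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Hold-out split: importances are evaluated on data not used for
# growing the forest, in the spirit of cross-validation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
scores = imp.importances_mean

# Heuristic null distribution (inspired by, but not identical to, the
# paper's procedure): mirror the non-positive importances around zero,
# then compute one-sided empirical p-values against this null.
nonpos = scores[scores <= 0]
null = np.concatenate([nonpos, -nonpos])
pvals = np.array([(null >= s).mean() for s in scores])

# p-values for the two informative variables (expected to be small).
print(pvals[:2])
```

The key point is that no refitting of the forest under permuted responses is needed: the null distribution is recycled from the many uninformative variables already present, which is what makes this kind of test fast in high dimensions.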
Pages: 885-915
Page count: 31