TREE ENSEMBLES WITH RULE STRUCTURED HORSESHOE REGULARIZATION

Cited: 11
Authors
Nalenz, Malte [1]
Villani, Mattias [1]
Affiliations
[1] Linkoping Univ, Dept Comp & Informat Sci, Div Stat & Machine Learning, SE-58183 Linkoping, Sweden
Keywords
Nonlinear regression; classification; decision trees; Bayesian; prediction; MCMC; interpretation; variable selection
DOI
10.1214/18-AOAS1157
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
We propose a new Bayesian model for flexible nonlinear regression and classification using tree ensembles. The model is based on the RuleFit approach in Friedman and Popescu [Ann. Appl. Stat. 2 (2008) 916-954], where rules from decision trees and linear terms are used in an L1-regularized regression. We modify RuleFit by replacing the L1-regularization with a horseshoe prior, which is well known to give aggressive shrinkage of noise predictors while leaving the important signal essentially untouched. This is especially important when a large number of rules are used as predictors, as many of them only contribute noise. Our horseshoe prior has an additional hierarchical layer that applies more shrinkage a priori to rules with a large number of splits, and to rules that are only satisfied by a few observations. The aggressive noise shrinkage of our prior also makes it possible to complement the rules from boosting in RuleFit with an additional set of trees from Random Forest, which brings a desirable diversity to the ensemble. We sample from the posterior distribution using a very efficient and easily implemented Gibbs sampler. The new model is shown to outperform state-of-the-art methods like RuleFit, BART and Random Forest on 16 datasets. The model and its interpretation are demonstrated on the well-known Boston housing data, and on gene expression data for cancer classification. The posterior sampling, prediction and graphical tools for interpreting the model results are implemented in a publicly available R package.
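The core RuleFit idea summarized above can be illustrated with a minimal sketch: tree-style rules become binary indicator columns, which are stacked with the raw linear terms into a design matrix and fit with a shrinkage regression. This is an assumption-laden toy, not the authors' method: the candidate rules are hand-written stand-ins for splits read off a tree ensemble, and ridge regression substitutes for the paper's horseshoe-prior Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-1, 1, size=(n, 3))

# True signal: one "rule" (indicator of a rectangular region,
# as a decision tree would carve out) plus one linear term.
true_rule = (X[:, 0] > 0.2) & (X[:, 1] < 0.5)
y = 2.0 * true_rule + 0.5 * X[:, 2] + rng.normal(0.0, 0.1, n)

# Hypothetical candidate rules, standing in for splits harvested
# from a boosted/bagged tree ensemble. Rules 1 and 2 are pure noise.
rules = [
    lambda Z: (Z[:, 0] > 0.2) & (Z[:, 1] < 0.5),   # matches the signal
    lambda Z: (Z[:, 1] > 0.0),                      # noise rule
    lambda Z: (Z[:, 2] < -0.3) & (Z[:, 0] < 0.0),   # noise rule
]

# RuleFit-style design matrix: rule indicators, then linear terms.
F = np.column_stack([r(X).astype(float) for r in rules] + [X])

# Ridge stand-in for the horseshoe-regularized regression: noise
# columns are shrunk toward zero, signal columns survive.
lam = 1.0
coef = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
print(np.round(coef, 2))  # signal rule near 2, linear term near 0.5
```

The horseshoe prior in the paper plays the same role as `lam` here, but adaptively: its local scales shrink noise rules far more aggressively while leaving strong rules essentially unpenalized, which ridge's single global penalty cannot do.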
Pages: 2379-2408
Page count: 30
References
35 total
[1] [Anonymous]. SUPPLEMENT TREE ENSE
[2] [Anonymous]. STAT COMPUT
[3] [Anonymous]. (2003). TECHNICAL REPORT
[4] [Anonymous]. J STAT COMPUT SIMUL
[5] [Anonymous]. PREPRINT
[6] Breiman, L. (1996). MACH LEARN, 24, 49
[7] Breiman, L. (2001). Random forests. MACHINE LEARNING, 45(1), 5-32
[8] Carvalho, C. M. (2009). International Conference on Artificial Intelligence and Statistics, 5, 73
[9] Carvalho, C. M.; Polson, N. G.; Scott, J. G. (2010). The horseshoe estimator for sparse signals. BIOMETRIKA, 97(2), 465-480
[10] Chen, T.; Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. KDD'16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785-794