Safe RuleFit: Learning Optimal Sparse Rule Model by Meta Safe Screening

Cited by: 4
Authors
Kato, Hiroki [1]
Hanada, Hiroyuki [2]
Takeuchi, Ichiro [2,3]
Affiliations
[1] Nagoya Institute of Technology, Department of Computer Science, Nagoya, Aichi 466-8555, Japan
[2] RIKEN, Center for Advanced Intelligence Project, Chuo-ku, Tokyo 103-0027, Japan
[3] Nagoya University, Graduate School of Engineering, Nagoya, Aichi 464-8601, Japan
Keywords
Predictive models; Random forests; Dictionaries; Analytical models; Regression tree analysis; Pattern analysis; Numerical models; Machine learning; knowledge representation formalisms and methods; convex programming; combinatorial algorithms; regression
DOI
10.1109/TPAMI.2022.3167993
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
We consider the problem of learning a sparse rule model, a prediction model in the form of a sparse linear combination of rules, where a rule is an indicator function defined over a hyper-rectangle in the input space. Because the number of all possible rules is extremely large, selecting the optimal set of active rules has been computationally intractable. In this paper, to overcome this difficulty, we propose Safe RuleFit (SRF) for learning the optimal sparse rule model. Our basic idea is meta safe screening (mSS), a non-trivial extension of well-known safe screening (SS) techniques. Whereas SS screens out one feature at a time, mSS can screen out multiple features at once by exploiting the inclusion relations of hyper-rectangles in the input space. SRF provides a general framework for fitting sparse rule models for regression and classification, and it can be extended to handle more general sparse regularizations such as group regularization. We demonstrate the advantages of SRF through intensive numerical experiments.
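For illustration, the model class described in the abstract can be sketched in our own notation as follows (the symbols K, w_k, J_k, \ell_{k,j}, and u_{k,j} are illustrative assumptions, not taken from the paper):

\[
f(\mathbf{x}) \;=\; b \;+\; \sum_{k=1}^{K} w_k\, r_k(\mathbf{x}),
\qquad
r_k(\mathbf{x}) \;=\; \prod_{j \in J_k} \mathbb{I}\!\left[\ell_{k,j} \le x_j \le u_{k,j}\right],
\]

where each rule $r_k$ indicates membership of $\mathbf{x}$ in a hyper-rectangle and a sparse regularizer drives most of the weights $w_k$ to zero.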
Pages: 2330-2343
Page count: 14