Tree induction for probability-based ranking

Cited by: 311
Authors
Provost, F [1 ]
Domingos, P [2]
Affiliations
[1] NYU, New York, NY 10012 USA
[2] Univ Washington, Seattle, WA 98195 USA
Funding
US National Science Foundation;
Keywords
ranking; probability estimation; classification; cost-sensitive learning; decision trees; Laplace correction; bagging;
DOI
10.1023/A:1024099825458
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Tree induction is one of the most effective and widely used methods for building classification models. However, many applications require cases to be ranked by the probability of class membership. Probability estimation trees (PETs) have the same attractive features as classification trees (e.g., comprehensibility, accuracy and efficiency in high dimensions and on large data sets). Unfortunately, decision trees have been found to provide poor probability estimates. Several techniques have been proposed to build more accurate PETs, but, to our knowledge, there has not been a systematic experimental analysis of which techniques actually improve the probability-based rankings, and by how much. In this paper we first discuss why the decision-tree representation is not intrinsically inadequate for probability estimation. Inaccurate probabilities are partially the result of decision-tree induction algorithms that focus on maximizing classification accuracy and minimizing tree size (for example, via reduced-error pruning). Larger trees can be better for probability estimation, even if the extra size is superfluous for accuracy maximization. We then present the results of a comprehensive set of experiments, testing some straightforward methods for improving probability-based rankings. We show that using a simple, common smoothing method - the Laplace correction - uniformly improves probability-based rankings. In addition, bagging substantially improves the rankings, and is even more effective for this purpose than for improving accuracy. We conclude that PETs, with these simple modifications, should be considered when rankings based on class-membership probability are required.
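To make the smoothing step in the abstract concrete, the sketch below (a hypothetical illustration, not code from the paper; the function name and interface are assumptions for the example) contrasts raw frequency estimates with Laplace-corrected estimates at a single tree leaf. With n_c examples of class c among N examples at a leaf and C classes in total, the Laplace correction replaces n_c / N with (n_c + 1) / (N + C), pulling extreme estimates away from 0 and 1.

```python
from collections import Counter

def leaf_class_probabilities(leaf_labels, classes, laplace=True):
    """Estimate class-membership probabilities at a decision-tree leaf.

    leaf_labels: class labels of the training examples that reach the leaf.
    classes:     all class labels in the problem.
    laplace:     if True, use the Laplace correction (n_c + 1) / (N + C)
                 instead of the raw frequency n_c / N.
    """
    counts = Counter(leaf_labels)
    n_total = len(leaf_labels)
    n_classes = len(classes)
    probs = {}
    for c in classes:
        if laplace:
            probs[c] = (counts[c] + 1) / (n_total + n_classes)
        else:
            probs[c] = counts[c] / n_total if n_total else 1.0 / n_classes
    return probs

# Example: a leaf holding 3 positives and no negatives.
# Raw frequency gives P(pos) = 1.0; Laplace gives (3 + 1) / (3 + 2) = 0.8,
# avoiding the overconfident extreme estimate that hurts rankings.
print(leaf_class_probabilities(["pos", "pos", "pos"], ["pos", "neg"]))
print(leaf_class_probabilities(["pos", "pos", "pos"], ["pos", "neg"], laplace=False))
```

The bagging result reported in the abstract works analogously: each bootstrap-sampled tree produces (smoothed) leaf probabilities, and the per-tree estimates are averaged to rank cases.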
Pages: 199-215
Page count: 17