Elitist and ensemble strategies for cascade generalization

Cited by: 1
Authors
Zhao, Huimin [1 ]
Sinha, Atish P.
Ram, Sudha
Affiliations
[1] Univ Wisconsin, Milwaukee, WI 53201 USA
[2] Univ Arizona, Tucson, AZ 85721 USA
Keywords
cascade generalization; data mining; decision tree; elitist strategy; ensemble method; voting method;
DOI
10.4018/jdm.2006070105
CLC number
TP [Automation technology, computer technology];
Subject classification number
0812;
Abstract
Several methods have been proposed for cascading other classification algorithms with decision tree learners to alleviate the representational bias of decision trees and, potentially, to improve classification accuracy. Such cascade generalization of decision trees increases the flexibility of the decision boundaries between classes and promotes better fitting of the training data. However, more flexible models do not necessarily yield more predictive power: because of potential overfitting, the true classification accuracy on test data may not increase. Recently, a generic method for cascade generalization has been proposed. The method uses a parameter, the maximum cascading depth, to constrain the degree to which other classification algorithms are cascaded with decision tree learners. A method for efficiently learning a collection (i.e., a forest) of generalized decision trees, each with other classification algorithms cascaded to a particular depth, has also been developed. In this article, we propose several new strategies, including elitist and ensemble (weighted or unweighted), for using the various decision trees in such a collection in the prediction phase. Our empirical evaluation using 32 data sets from the UCI machine learning repository shows that, on average, the elitist strategy outperforms the weighted full ensemble strategy, which, in turn, outperforms the unweighted full ensemble strategy. However, no strategy is universally superior across all applications. Since the same training process can be used to evaluate the various strategies, we recommend that several promising strategies be evaluated and compared before selecting the one to use for a given application.
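The three prediction strategies named in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: it assumes a forest of generalized decision trees (one per cascading depth) has already been trained and that each tree's validation accuracy is known; the function names and toy classifiers are invented for the example.

```python
from collections import Counter

def elitist_predict(trees, accuracies, x):
    # Elitist strategy: predict with only the single tree that scored
    # the highest validation accuracy.
    best = max(range(len(trees)), key=lambda i: accuracies[i])
    return trees[best](x)

def ensemble_predict(trees, x, weights=None):
    # Full ensemble strategy: every tree votes. Passing validation
    # accuracies as weights gives the weighted variant; weights=None
    # gives the unweighted (simple majority) variant.
    votes = Counter()
    for i, tree in enumerate(trees):
        votes[tree(x)] += 1.0 if weights is None else weights[i]
    return votes.most_common(1)[0][0]

# Toy stand-ins for trees cascaded to depths 0, 1, 2.
trees = [lambda x: "A", lambda x: "B", lambda x: "B"]
accs = [0.9, 0.7, 0.6]
print(elitist_predict(trees, accs, None))           # best tree says "A"
print(ensemble_predict(trees, None))                # majority says "B"
print(ensemble_predict(trees, None, weights=accs))  # 0.9 A vs 1.3 B -> "B"
```

As the toy output shows, the elitist and ensemble strategies can disagree on the same input, which is consistent with the paper's finding that no single strategy is universally superior.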
Pages: 92-107
Number of pages: 16