FORESTEXTER: An efficient random forest algorithm for imbalanced text categorization

Cited by: 98
Authors
Wu, Qingyao [1 ]
Ye, Yunming [1 ]
Zhang, Haijun [1 ]
Ng, Michael K. [2 ]
Ho, Shen-Shyang [3 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen Grad Sch, Shenzhen Key Lab Internet Informat Collaborat, Shenzhen 518055, Peoples R China
[2] Hong Kong Baptist Univ, Dept Math, Kowloon, Hong Kong, Peoples R China
[3] Nanyang Technol Univ, Sch Comp Engn, Singapore 639798, Singapore
Keywords
Text categorization; Imbalanced classification; Random forests; SVM; Stratified sampling; SUPPORT VECTOR MACHINES; DECISION TREES; CLASSIFICATION;
DOI
10.1016/j.knosys.2014.06.004
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we propose a new random forest (RF) based ensemble method, FORESTEXTER, to solve imbalanced text categorization problems. RF has shown great success in many real-world applications. However, the problem of learning from text data with class imbalance is a relatively new challenge that needs to be addressed. The RF algorithm tends to use simple random sampling of features when building its decision trees. As a result, it selects many subspaces that contain few, if any, informative features for the minority class. Furthermore, the Gini measure for data splitting is considered to be skew-sensitive and biased towards the majority class. Due to the inherently complex characteristics of imbalanced text datasets, learning RF from such data requires new approaches to overcome challenges related to feature subspace selection and cut-point choice while performing node splitting. To this end, we propose a new tree induction method that selects splits, both feature subspace selection and splitting criterion, for RF on imbalanced text data. The key idea is to stratify features into two groups and to generate effective term weightings for the features. One group contains positive features for the minority class, and the other contains negative features for the majority class. Then, for feature subspace selection, we effectively select features from each group based on the term weights. The advantage of our approach is that each subspace contains adequate informative features for both the minority and majority classes. One difference between our proposed tree induction method and the classical RF method is that our method uses a Support Vector Machine (SVM) classifier to split the training data into smaller and more balanced subsets at each tree node, and then successively retrains the SVM classifiers on the data partitions to refine the model while moving down the tree.
In this way, we force the classifiers to learn from refined feature subspaces and data subsets to fit the imbalanced data better. Hence, the tree model becomes more robust for text categorization tasks with imbalanced datasets. Experimental results on various benchmark imbalanced text datasets (Reuters-21578, Ohsumed, and imbalanced 20 Newsgroups) consistently demonstrate the effectiveness of our proposed FORESTEXTER method. The performance of our proposed approach is competitive with the standard random forest and different variants of SVM algorithms. (C) 2014 Elsevier B.V. All rights reserved.
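The stratified feature subspace selection described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function and variable names (`stratified_subspace`, `pos`, `neg`, `w`) and the toy term weights are hypothetical, and the paper's actual term-weighting scheme is not reproduced here. The sketch only shows the core idea of drawing half the subspace from the minority-class (positive) feature group and half from the majority-class (negative) group, with sampling probability proportional to term weight.

```python
import random

def stratified_subspace(pos_features, neg_features, weights, subspace_size, rng=None):
    """Sample a feature subspace by drawing separately from the positive
    (minority-class) and negative (majority-class) feature groups, with
    probability proportional to each feature's term weight.

    Weighted sampling without replacement is emulated by repeated weighted
    draws, removing each picked feature from the candidate pool."""
    rng = rng or random.Random()
    half = subspace_size // 2
    subspace = []
    for group, k in ((pos_features, half), (neg_features, subspace_size - half)):
        pool = list(group)
        for _ in range(min(k, len(pool))):
            w = [weights[f] for f in pool]
            # random.choices draws with replacement, so remove the pick
            # from the pool to avoid selecting the same feature twice
            pick = rng.choices(pool, weights=w, k=1)[0]
            subspace.append(pick)
            pool.remove(pick)
    return subspace

# Hypothetical toy vocabulary: three minority-indicative and three
# majority-indicative terms with made-up weights
pos = ["earn", "dividend", "payout"]
neg = ["the", "report", "market"]
w = {"earn": 0.9, "dividend": 0.7, "payout": 0.5,
     "the": 0.1, "report": 0.3, "market": 0.2}
print(stratified_subspace(pos, neg, w, 4, random.Random(0)))
```

Because the draw is stratified, every sampled subspace is guaranteed to contain informative features for the minority class, unlike plain uniform feature sampling, which can easily miss them when the vocabulary is dominated by majority-class terms.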
Pages: 105-116
Number of pages: 12