Using Class Imbalance Learning for Software Defect Prediction

Cited by: 388
Authors
Wang, Shuo [1]
Yao, Xin [1]
Affiliations
[1] Univ Birmingham, Sch Comp Sci, CERCIA, Birmingham B15 2TT, W Midlands, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Class imbalance learning; ensemble learning; negative correlation learning; software defect prediction; STATIC CODE ATTRIBUTES; NEURAL-NETWORKS; MACHINE
DOI
10.1109/TR.2013.2259203
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
To facilitate software testing and save testing costs, a wide range of machine learning methods have been studied to predict defects in software modules. Unfortunately, the imbalanced nature of this type of data increases the learning difficulty of such a task. Class imbalance learning specializes in tackling classification problems with imbalanced distributions, which could be helpful for defect prediction, but has not been investigated in depth so far. In this paper, we study whether and how class imbalance learning methods can benefit software defect prediction, with the aim of finding better solutions. We investigate different types of class imbalance learning methods, including resampling techniques, threshold moving, and ensemble algorithms. Among the methods we studied, AdaBoost.NC shows the best overall performance in terms of balance, G-mean, and Area Under the Curve (AUC). To further improve the performance of the algorithm and facilitate its use in software defect prediction, we propose a dynamic version of AdaBoost.NC, which adjusts its parameter automatically during training. Without the need to pre-define any parameters, it is shown to be more effective and efficient than the original AdaBoost.NC.
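For illustration, the short sketch below (not the authors' implementation) shows how the evaluation measures named in the abstract can be computed for a binary defect-prediction task, together with a simple threshold-moving step. G-mean and balance follow their standard definitions (G-mean as the geometric mean of the true positive and true negative rates; balance as one minus the normalized Euclidean distance from the ideal point PD = 1, PF = 0). The scikit-learn AUC call, the 0.3 threshold, and the toy data are assumptions made for the example only.

import numpy as np
from sklearn.metrics import roc_auc_score

def gmean_and_balance(y_true, y_pred):
    """Compute G-mean and balance from binary labels (1 = defective module)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    pd = tp / (tp + fn)      # probability of detection (recall on defective class)
    pf = fp / (fp + tn)      # probability of false alarm (non-defective class)
    gmean = np.sqrt(pd * (1.0 - pf))
    balance = 1.0 - np.sqrt(((0.0 - pf) ** 2 + (1.0 - pd) ** 2) / 2.0)
    return gmean, balance

def threshold_moving(scores, threshold=0.3):
    """Lower the decision threshold so more modules are flagged as defective."""
    return (np.asarray(scores) >= threshold).astype(int)

# Example usage with toy data (illustrative, not from the paper):
y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
scores = [0.7, 0.2, 0.4, 0.35, 0.1, 0.05, 0.3, 0.6, 0.15, 0.25]
y_pred = threshold_moving(scores, threshold=0.3)
g, b = gmean_and_balance(y_true, y_pred)
print(f"G-mean={g:.3f}, balance={b:.3f}, AUC={roc_auc_score(y_true, scores):.3f}")

Moving the threshold below 0.5 trades a higher false-alarm rate for better recall of the defective minority class, which is exactly the trade-off that balance and G-mean are designed to reward.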
Pages: 434-443
Page count: 10