Dynamic Selection of Classifiers in Bug Prediction: An Adaptive Method

Cited by: 58
Authors
Di Nucci, Dario [1 ]
Palomba, Fabio [3 ]
Oliveto, Rocco [4 ]
De Lucia, Andrea [2 ]
Affiliations
[1] Univ Salerno, I-84084 Fisciano, Italy
[2] Univ Salerno, Dept Comp Sci, Software Engn, I-84084 Fisciano, Italy
[3] Delft Univ Technol, NL-2628 Delft, Netherlands
[4] Univ Molise, Datasounds, I-86100 Campobasso, Italy
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2017, Vol. 1, No. 3
Keywords
Bug prediction; classifier selection; ensemble techniques; machine learning techniques; defect prediction; software; framework; metrics; RBF
DOI
10.1109/TETCI.2017.2699224
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Over the last decades, the research community has devoted considerable effort to defining approaches able to predict the defect proneness of source code files. Such approaches exploit several predictors (e.g., product or process metrics) and use machine learning classifiers either to classify classes as buggy or not buggy, or to provide the likelihood that a class will exhibit a fault in the near future. The empirical evaluation of these approaches indicated that no single machine learning classifier provides the best accuracy in every context, while highlighting an interesting complementarity among them. For this reason, ensemble methods have been proposed to estimate the bug-proneness of a class by combining the predictions of different classifiers. Following this line of research, in this paper we propose an adaptive method, named ASCI (Adaptive Selection of Classifiers In bug prediction), that dynamically selects, from a set of machine learning classifiers, the one that best predicts the bug-proneness of a class based on its characteristics. An empirical study conducted on 30 software systems indicates that ASCI achieves higher performance than five different classifiers used independently and than their combination through the majority voting ensemble method.
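The adaptive selection described in the abstract lends itself to a compact sketch: train a pool of base classifiers, learn a meta-model that maps the characteristics of a class to the classifier expected to predict it correctly, and let that meta-model pick a single classifier per class at prediction time. Below is a minimal sketch in Python with scikit-learn; the base classifier pool, the tie-breaking rule when several classifiers are correct, and the fallback when none is correct are illustrative assumptions, not the authors' exact ASCI configuration.

# Minimal sketch of dynamic classifier selection for bug prediction.
# X_train holds per-class predictors (e.g., product/process metrics),
# y_train holds binary buggy / not-buggy labels; all names are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def fit_dynamic_selector(X_train, y_train):
    y_train = np.asarray(y_train)

    # 1) Train the pool of base classifiers on the bug data set.
    base = [
        LogisticRegression(max_iter=1000),
        GaussianNB(),
        RandomForestClassifier(n_estimators=100, random_state=42),
    ]
    for clf in base:
        clf.fit(X_train, y_train)

    # 2) Label every training instance with the index of a base classifier
    #    that predicts it correctly (first correct one as a simple tie-break;
    #    fall back to classifier 0 when none is correct).
    preds = np.array([clf.predict(X_train) for clf in base])  # (n_clf, n_samples)
    correct = preds == y_train
    selector_labels = np.where(correct.any(axis=0), correct.argmax(axis=0), 0)

    # 3) Train a meta-classifier (a decision tree here) that maps the
    #    characteristics of a class to the classifier expected to work best.
    meta = DecisionTreeClassifier(random_state=42)
    meta.fit(X_train, selector_labels)
    return base, meta

def predict_dynamic(base, meta, X_test):
    # For each new class, ask the meta-classifier which base classifier to
    # trust, then return that classifier's buggy / not-buggy prediction.
    chosen = meta.predict(X_test)
    all_preds = np.array([clf.predict(X_test) for clf in base])
    return all_preds[chosen, np.arange(len(X_test))]

Unlike majority voting, which aggregates the output of every classifier for every class, this kind of selection consults only the single classifier the meta-model deems most suitable for each class, which is how the complementarity among classifiers mentioned in the abstract can be exploited.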
Pages: 202-212
Number of pages: 11