An Empirical Study on Wrapper-based Feature Selection for Software Engineering Data

Cited by: 1
Authors
Wang, Huanjing [1 ]
Khoshgoftaar, Taghi M. [2 ]
Napolitano, Amri [2 ]
Affiliations
[1] Western Kentucky Univ, Bowling Green, KY 42101 USA
[2] Florida Atlantic Univ, Boca Raton, FL 33431 USA
Source
2013 12TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2013), VOL 2 | 2013
Keywords
wrapper-based feature selection; learner; software quality prediction model; CLASSIFICATION; MODELS
DOI
10.1109/ICMLA.2013.110
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Software metrics provide valuable information for understanding and predicting the quality of software modules, so it is important to select the right software metrics when building software quality classification models. In this paper we focus on wrapper-based feature (metric) selection techniques, which evaluate the merit of feature subsets based on the performance of classification models. We seek to understand the relationship between the internal learner used inside the wrapper and the external learner used to build the final classification model. We perform experiments using four consecutive releases of a very large telecommunications system, each comprising 42 software metrics, with defect data collected for every program module. Our results demonstrate that (1) the best performance is never found when the internal and external learners match; (2) the best performance is usually found by using NB (Naive Bayes) inside the wrapper, unless SVM (Support Vector Machine) is the external learner; (3) LR (Logistic Regression) is often the best learner for building classification models, regardless of which learner was used inside the wrapper.
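The wrapper scheme the abstract describes can be sketched in a few lines: an internal learner scores candidate feature subsets, greedy forward selection picks the subset, and an external learner is then trained on the chosen features. This is a minimal pure-Python illustration only; the toy dataset, the 1-NN internal learner, and the nearest-centroid external learner are assumptions for the sketch, not the paper's actual NB/LR/SVM setup or its 42-metric data.

```python
import math

# Toy dataset: 4 "metrics" per module; metrics 0 and 1 separate the two
# classes (1 = fault-prone, 0 = not fault-prone), metrics 2 and 3 are noise.
X = [[0.10, 0.20, 5.0, 9.0], [0.20, 0.10, 1.0, 8.0], [0.15, 0.25, 7.0, 2.0],
     [0.90, 0.80, 6.0, 1.0], [0.80, 0.90, 2.0, 7.0], [0.85, 0.75, 4.0, 3.0]]
y = [0, 0, 0, 1, 1, 1]

def dist(a, b, feats):
    """Euclidean distance restricted to a feature subset."""
    return math.sqrt(sum((a[f] - b[f]) ** 2 for f in feats))

def loo_1nn_accuracy(X, y, feats):
    """Leave-one-out accuracy of a 1-NN 'internal' learner on a subset."""
    correct = 0
    for i in range(len(X)):
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: dist(X[i], X[k], feats))
        correct += (y[j] == y[i])
    return correct / len(X)

def forward_select(X, y, n_feats, k):
    """Greedy wrapper: repeatedly add the feature that most improves
    the internal learner's accuracy, until k features are chosen."""
    chosen = []
    while len(chosen) < k:
        best = max((f for f in range(n_feats) if f not in chosen),
                   key=lambda f: loo_1nn_accuracy(X, y, chosen + [f]))
        chosen.append(best)
    return chosen

def centroid_predict(X, y, feats, query):
    """'External' nearest-centroid learner trained on the chosen subset."""
    cents = {}
    for label in set(y):
        pts = [X[i] for i in range(len(X)) if y[i] == label]
        cents[label] = [sum(p[f] for p in pts) / len(pts) for f in feats]
    return min(cents, key=lambda lab: math.sqrt(
        sum((query[f] - c) ** 2 for f, c in zip(feats, cents[lab]))))

feats = forward_select(X, y, n_feats=4, k=2)
print(sorted(feats))                                     # → [0, 1]
print(centroid_predict(X, y, feats, [0.88, 0.82, 3.0, 5.0]))  # → 1
```

The internal and external learners deliberately differ here, mirroring the paper's observation that matching the two never gave the best performance.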
Pages: 84-89
Page count: 6