Assessing Test Case Prioritization on Real Faults and Mutants

Cited by: 34
Authors
Luo, Qi [1 ]
Moran, Kevin [1 ]
Poshyvanyk, Denys [1 ]
Di Penta, Massimiliano [2 ]
Affiliations
[1] College of William & Mary, Department of Computer Science, Williamsburg, VA 23185, USA
[2] University of Sannio, Department of Engineering, Benevento, Italy
Source
Proceedings of the 2018 IEEE International Conference on Software Maintenance and Evolution (ICSME), 2018
Keywords
Mutation
DOI
10.1109/ICSME.2018.00033
Chinese Library Classification (CLC): TP31 [Computer Software]
Discipline codes: 081202; 0835
Abstract
Test Case Prioritization (TCP) is an important component of regression testing, allowing for earlier detection of faults or helping to reduce testing time and cost. While several TCP approaches exist in the research literature, a growing number of studies have evaluated them against synthetic software defects, called mutants. Hence, it is currently unclear to what extent TCP performance on mutants would be representative of the performance achieved on real faults. To answer this fundamental question, we conduct the first empirical study comparing the performance of TCP techniques applied to both real-world and mutation faults. The context of our study includes eight well-studied TCP approaches, 35k+ mutation faults, and 357 real-world faults from five Java systems in the Defects4J dataset. Our results indicate that the relative performance of the studied TCP techniques on mutants may not strongly correlate with performance on real faults, depending upon attributes of the subject programs. This suggests that, in certain contexts, the best performing technique on a set of mutants may not be the best technique in practice when applied to real faults. We also illustrate that these correlations vary for mutants generated by different operators depending on whether chosen operators reflect typical faults of a subject program. This highlights the importance, particularly for TCP, of developing mutation operators tailored for specific program domains.
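To make the comparison described in the abstract concrete, the sketch below shows one way such an evaluation could be instrumented. It is an assumption-laden illustration, not the authors' protocol: the abstract names neither an effectiveness metric nor a correlation statistic, so the sketch assumes the widely used APFD metric and Kendall's tau rank correlation, and all technique names, test ids, mutant ids, and fault ids are hypothetical.

# Illustrative sketch only, not the paper's exact protocol. Assumes APFD as the
# effectiveness metric and Kendall's tau for comparing mutant-based vs. real-fault-based
# results; all identifiers and data below are hypothetical.
from scipy.stats import kendalltau

def apfd(prioritized_tests, fault_detection):
    """Average Percentage of Faults Detected for one test ordering.

    prioritized_tests: list of test ids in execution order.
    fault_detection:   dict mapping each fault id to the set of test ids that detect it.
    APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n).
    """
    n, m = len(prioritized_tests), len(fault_detection)
    position = {test: i + 1 for i, test in enumerate(prioritized_tests)}
    first_detect_sum = sum(
        min(position[t] for t in detecting_tests)
        for detecting_tests in fault_detection.values()
    )
    return 1.0 - first_detect_sum / (n * m) + 1.0 / (2.0 * n)

# Hypothetical orderings produced by three TCP techniques over the same test suite.
ordering_by_technique = {
    "total-statement-coverage": ["t3", "t1", "t2", "t4"],
    "additional-statement-coverage": ["t2", "t3", "t1", "t4"],
    "random": ["t4", "t2", "t1", "t3"],
}
# Hypothetical detection data: which tests kill each mutant / expose each real fault.
mutant_detection = {"m1": {"t1"}, "m2": {"t2", "t3"}, "m3": {"t4"}}
real_fault_detection = {"f1": {"t2"}, "f2": {"t1", "t4"}}

apfd_on_mutants = [apfd(order, mutant_detection) for order in ordering_by_technique.values()]
apfd_on_real = [apfd(order, real_fault_detection) for order in ordering_by_technique.values()]

# A tau near 1 would mean mutants rank the techniques the same way real faults do;
# a weak correlation mirrors the paper's cautionary finding.
tau, p_value = kendalltau(apfd_on_mutants, apfd_on_real)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.2f})")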
Pages: 240-251 (12 pages)