Evaluating Non-adequate Test-Case Reduction

Cited by: 16
Authors
Alipour, Mohammad Amin [1 ]
Shi, August [2 ]
Gopinath, Rahul [1 ]
Marinov, Darko [2 ]
Grocer, Alex [1 ]
Affiliations
[1] Oregon State Univ, Sch Elect Engn & Comp Sci, Corvallis, OR 97331 USA
[2] Univ Illinois, Dept Comp Sci, Champaign, IL USA
Source
2016 31ST IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE) | 2016
Funding
US National Science Foundation;
Keywords
test reduction; test adequacy; coverage; mutation testing;
DOI
10.1145/2970276.2970361
Chinese Library Classification (CLC) code
TP31 [Computer Software];
Discipline classification codes
081202; 0835;
Abstract
Given two test cases, one larger and one smaller, the smaller test case is preferred for many purposes. A smaller test case usually runs faster, is easier to understand, and is more convenient for debugging. However, smaller test cases also tend to cover less code and detect fewer faults than larger test cases. Whereas traditional research focused on reducing test suites while preserving code coverage, recent work has introduced the idea of reducing individual test cases, rather than test suites, while still preserving code coverage. Other recent work has proposed non-adequately reducing test suites by not even preserving all the code coverage. This paper empirically evaluates a new combination of these two ideas, non-adequate reduction of test cases, which allows for a wide range of trade-offs between test case size and fault detection. Our study introduces and evaluates C%-coverage reduction (where a test case is reduced to retain at least C% of its original coverage) and N-mutant reduction (where a test case is reduced to kill at least N of the mutants it originally killed). We evaluate the reduction trade-offs with varying values of C% and N for four real-world C projects: Mozilla's SpiderMonkey JavaScript engine, the YAFFS2 flash file system, Grep, and Gzip. The results show that it is possible to greatly reduce the size of many test cases while still preserving much of their fault-detection capability.
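To make the reduction criteria concrete, the following is a minimal, illustrative sketch (in Python) of how a C%-coverage criterion can drive a greedy, delta-debugging-style reducer. The harness hook run_and_measure_coverage and all other names are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
# Illustrative sketch only: greedy chunk-removal reduction of a test case
# (given as a list of steps) under a non-adequate C%-coverage criterion.
# `run_and_measure_coverage` is a hypothetical harness hook that executes
# the candidate test case and returns the set of covered code elements.
from typing import Callable, List, Set


def reduce_test(steps: List[str],
                run_and_measure_coverage: Callable[[List[str]], Set[str]],
                c_percent: float) -> List[str]:
    """Drop chunks of `steps` while retaining at least c_percent of the
    coverage achieved by the original, unreduced test case."""
    original = run_and_measure_coverage(steps)
    required = (c_percent / 100.0) * len(original)

    def acceptable(candidate: List[str]) -> bool:
        # Non-adequate criterion: keeping C% of the *original* coverage suffices.
        return len(run_and_measure_coverage(candidate) & original) >= required

    current = list(steps)
    chunk = max(1, len(current) // 2)
    while chunk >= 1:
        i = 0
        while i < len(current):
            candidate = current[:i] + current[i + chunk:]
            if candidate and acceptable(candidate):
                current = candidate   # chunk removable: keep the smaller test
            else:
                i += chunk            # chunk needed: move past it
        chunk //= 2
    return current
```

An N-mutant criterion would change only the predicate: instead of comparing covered elements, acceptable would check that the candidate still kills at least N of the mutants killed by the original test case. Setting c_percent to 100 (or N to the full set of originally killed mutants) recovers the adequate-reduction baseline described in the abstract.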
Pages: 16-26
Number of pages: 11