Seeding strategies in search-based unit test generation

Cited by: 70
Authors
Rojas, Jose Miguel [1]
Fraser, Gordon [1]
Arcuri, Andrea [2,3]
Affiliations
[1] Univ Sheffield, Dept Comp Sci, 211 Portobello, Sheffield S1 4DP, S Yorkshire, England
[2] Scienta, Oslo, Norway
[3] Univ Luxembourg, SnT Ctr, Luxembourg, Luxembourg
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
test case generation; search-based testing; testing classes; search-based software engineering; JUnit; Java
DOI
10.1002/stvr.1601
Chinese Library Classification (CLC)
TP31 [Computer software];
Discipline classification codes
081202; 0835;
Abstract
Search-based techniques have been applied successfully to the task of generating unit tests for object-oriented software. However, as for any meta-heuristic search, the efficiency heavily depends on many factors; seeding, which refers to the use of previous related knowledge to help solve the testing problem at hand, is one such factor that may strongly influence this efficiency. This paper investigates different seeding strategies for unit test generation, in particular seeding of numerical and string constants derived statically and dynamically, seeding of type information and seeding of previously generated tests. To understand the effects of these seeding strategies, the results of a large empirical analysis carried out on a large collection of open-source projects from the SF110 corpus and the Apache Commons repository are reported. These experiments show with strong statistical confidence that, even for a testing tool already able to achieve high coverage, the use of appropriate seeding strategies can further improve performance. (C) 2016 The Authors. Software Testing, Verification and Reliability published by John Wiley & Sons, Ltd.
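As a rough illustration of the constant-seeding idea described in the abstract, the sketch below shows how a search-based test generator might draw primitive values from a pool of constants mined statically from the class under test, instead of generating purely random values. All names, the pool contents, and the seeding probability `P_SEED` are hypothetical, not taken from the paper or from any specific tool.

```java
import java.util.List;
import java.util.Random;

// Hypothetical sketch of constant seeding: when the search needs a new
// primitive value, it draws from constants extracted from the class under
// test with probability P_SEED, and falls back to a random value otherwise.
public class ConstantSeedingSketch {
    private static final double P_SEED = 0.3; // assumed seeding probability
    private static final Random RNG = new Random(42);

    // Constants that static analysis might mine from the bytecode of the
    // class under test (e.g. operands of string/numeric comparisons).
    private static final List<String> STRING_POOL = List.of("admin", "yyyy-MM-dd", "");
    private static final List<Integer> INT_POOL = List.of(0, 1, 1024);

    static String nextString() {
        if (!STRING_POOL.isEmpty() && RNG.nextDouble() < P_SEED) {
            // seeded: reuse a constant observed in the code under test
            return STRING_POOL.get(RNG.nextInt(STRING_POOL.size()));
        }
        // unseeded fallback: a short random lowercase string
        StringBuilder sb = new StringBuilder();
        int len = RNG.nextInt(8);
        for (int i = 0; i < len; i++) sb.append((char) ('a' + RNG.nextInt(26)));
        return sb.toString();
    }

    static int nextInt() {
        if (!INT_POOL.isEmpty() && RNG.nextDouble() < P_SEED) {
            return INT_POOL.get(RNG.nextInt(INT_POOL.size()));
        }
        return RNG.nextInt(2001) - 1000; // random value in [-1000, 1000]
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            System.out.println(nextString() + " / " + nextInt());
        }
    }
}
```

The intuition is that branch conditions such as `name.equals("admin")` are far easier to cover when the exact constant is occasionally injected than when the search must synthesize it character by character.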
Pages: 366-401
Page count: 36