Diversity-driven unit test generation

Cited: 5
Authors
Kessel, Marcus [1 ]
Atkinson, Colin [1 ]
Affiliations
[1] Univ Mannheim, D-68159 Mannheim, Germany
Keywords
Diversity; Test generation; Test amplification; Automation; Behavior; Experiment; Evaluation; Test quality; SOFTWARE; SEARCH; REUSE
DOI
10.1016/j.jss.2022.111442
CLC classification
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
The goal of automated unit test generation tools is to create a set of test cases for the software under test that achieves the highest possible coverage for the selected test quality criteria. The most effective current approaches use meta-heuristic optimization algorithms to search for new test cases, guided by fitness functions defined over existing test cases and the system under test. Regardless of how their search algorithms are controlled, however, all existing approaches drive their search by analyzing exactly one implementation, the software under test, which limits the information available to them. In this paper we investigate whether the practical effectiveness of white-box unit test generation tools can be increased by giving them access to multiple, diverse implementations of the functionality under test harvested from widely available Open Source software repositories. After presenting a basic implementation of such an approach, DivGen (Diversity-driven Generation), built on top of the leading test generation tool for Java (EvoSuite), we compare the performance of DivGen with that of EvoSuite applied in its traditional, mono-implementation mode (MonoGen). The results show that DivGen outperforms MonoGen in 33% of the sampled classes for mutation coverage (+16% higher on average), while MonoGen outperforms DivGen in 12.4% of the classes for branch coverage (+10% higher on average). (c) 2022 Elsevier Inc. All rights reserved.
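The core idea the abstract describes, using multiple diverse implementations of the same functionality to guide test generation, can be illustrated with a minimal differential-testing sketch. This is a hypothetical toy example, not the authors' DivGen implementation (which builds on EvoSuite's search infrastructure); all function names here are invented for illustration. Inputs on which independently written implementations disagree are treated as high-value test inputs, since at least one implementation must be wrong on them.

```python
# Hypothetical sketch of diversity-driven input selection (not the authors' code).
# Given several independently written implementations of the same function,
# keep the candidate inputs on which their observable behavior diverges.

def divergent_inputs(implementations, candidates):
    """Return the candidate inputs whose outcomes differ across implementations."""
    divergent = []
    for x in candidates:
        outcomes = set()
        for impl in implementations:
            try:
                outcomes.add(repr(impl(x)))
            except Exception as e:          # a raised exception is also observable behavior
                outcomes.add(f"raised {type(e).__name__}")
        if len(outcomes) > 1:               # behavioral disagreement -> interesting input
            divergent.append(x)
    return divergent

# Three "diverse" implementations of integer absolute value; the third
# deliberately emulates 32-bit two's-complement wrap-around, so it
# disagrees with the others at the most negative 32-bit value.
def abs_builtin(x):
    return abs(x)

def abs_branch(x):
    return -x if x < 0 else x

def abs_wrapping(x):                        # buggy on purpose: 32-bit wrap-around
    r = -x if x < 0 else x
    return (r + 2**31) % 2**32 - 2**31

print(divergent_inputs([abs_builtin, abs_branch, abs_wrapping],
                       [-5, 0, 7, -2**31]))
# -> [-2147483648]
```

The divergent input exposes exactly the kind of edge case a mono-implementation fitness function has no signal for: all three variants agree on typical values, so only cross-implementation comparison singles out -2**31 as worth turning into a test case.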
Pages: 22