Evaluating software testing techniques: A systematic mapping study

Cited by: 2
Authors
Mayeda, Mitchell [1]
Andrews, Anneliese [2]
Affiliations
[1] Univ Denver, Denver, CO 80210 USA
[2] Univ Denver, Comp Sci, Denver, CO USA
Source
ADVANCES IN COMPUTERS, VOL 123 | 2021 / Vol. 123
Keywords
AUTOMATED TEST-GENERATION; WEB APPLICATIONS; COMBINATORIAL; COVERAGE; PARALLEL; MODEL; ATOMICITY; ALGORITHM; STRATEGY; PROGRAMS
DOI
10.1016/bs.adcom.2021.01.002
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Software testing techniques are crucial for detecting faults in software and for reducing the risk of using it. It is therefore important to have a good understanding of how to evaluate these techniques for their efficiency, scalability, applicability, and effectiveness at finding faults. This article enhances our understanding of software testing technique evaluations by providing an overview of the state of the art in research and by structuring the field to help researchers locate the types of evaluations they are interested in. To do so, a systematic mapping study is performed. Three hundred sixty-five primary studies are systematically collected from the field and mapped into categories based on numerous classification schemes. This reveals the distribution of research across categories and identifies research gaps. It also yields a mapping from each combination of categories to the papers belonging to it, allowing researchers to quickly locate all testing technique evaluation research with the properties they are interested in. Further classifications are performed on case study and experiment evaluations in order to assess their relative quality. The distribution of research by various category combinations is presented, along with a large table mapping each category combination to the papers belonging to it. We find that the majority of evaluations are empirical evaluations in the form of case studies and experiments, that most of these are of low quality when judged against established methodology guidelines, and that relatively few papers in the field discuss how testing techniques should be evaluated.
Pages: 41-114
Page count: 74