NaNofuzz: A Usable Tool for Automatic Test Generation

Cited by: 1
Authors
Davis, Matthew C. [1 ]
Choi, Sangheon [2 ]
Estep, Sam [1 ]
Myers, Brad A. [1 ]
Sunshine, Joshua [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Rose Hulman Inst Technol, Terre Haute, IN USA
Source
PROCEEDINGS OF THE 31ST ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2023 | 2023
Keywords
Empirical software engineering; user study; software testing; human subjects; experiments; usable testing; automatic test generation;
DOI
10.1145/3611643.3616327
Chinese Library Classification
TP31 [Computer Software];
Discipline Classification Code
081202; 0835;
Abstract
In the United States alone, software testing labor is estimated to cost $48 billion USD per year. Despite widespread test execution automation and automation in other areas of software engineering, test suites continue to be created manually by software engineers. We have built a test generation tool, called NaNofuzz, that helps users find bugs in their code by suggesting tests where the output is likely indicative of a bug, e.g., tests that return NaN (not-a-number) values. NaNofuzz is an interactive tool embedded in a development environment to fit into the programmer's workflow. NaNofuzz tests a function with as little as one button press, analyzes the program to determine inputs it should evaluate, executes the program on those inputs, and categorizes outputs to prioritize likely bugs. We conducted a randomized controlled trial with 28 professional software engineers using NaNofuzz as the intervention treatment and the popular manual testing tool, Jest, as the control treatment. Participants using NaNofuzz on average identified bugs more accurately (p < .05, by 30%), were more confident in their tests (p < .03, by 20%), and finished their tasks more quickly (p < .007, by 30%).
Pages: 1114-1126
Page count: 13