Improving Random GUI Testing with Image-Based Widget Detection

Cited by: 57
Authors
White, Thomas D. [1 ]
Fraser, Gordon [2 ]
Brown, Guy J. [1 ]
Affiliations
[1] Univ Sheffield, Dept Comp Sci, Sheffield, S Yorkshire, England
[2] Univ Passau, Chair Software Engn 2, Passau, Germany
Source
PROCEEDINGS OF THE 28TH ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS (ISSTA '19) | 2019
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
GUI testing; object detection; random testing; black box testing; software engineering; data generation; neural networks;
DOI
10.1145/3293882.3330551
CLC Number
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Graphical User Interfaces (GUIs) are amongst the most common user interfaces, enabling interactions with applications through mouse movements and key presses. Tools for automated testing of programs through their GUI exist; however, they usually rely on operating system or framework specific knowledge to interact with an application. Due to frequent operating system updates, which can remove required information, and a large variety of different GUI frameworks using unique underlying data structures, such tools rapidly become obsolete. Consequently, for an automated GUI test generation tool, supporting many frameworks and operating systems is impractical. We propose a technique for improving GUI testing by automatically identifying GUI widgets in screenshots using machine learning techniques. As training data, we generate randomized GUIs to automatically extract widget information. The resulting model provides guidance to GUI testing tools in environments not currently supported, by deriving GUI widget information from screenshots only. In our experiments, we found that identifying GUI widgets in screenshots and using this information to guide random testing achieved significantly higher branch coverage in 18 of 20 applications, with an average increase of 42.5% when compared to conventional random testing.
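The core idea of the abstract — biasing random input generation toward widget regions detected in a screenshot — can be illustrated with a minimal sketch. The function name, the bounding-box format, and the `widget_bias` probability below are illustrative assumptions, not the paper's actual implementation:

```python
import random

def random_click(screen_w, screen_h, widget_boxes, widget_bias=0.75, rng=random):
    """Pick a click point for random GUI testing.

    With probability `widget_bias`, aim at the centre of a randomly chosen
    detected widget (boxes as (left, top, width, height) tuples, e.g. from an
    object-detection model run on a screenshot); otherwise fall back to a
    uniformly random screen coordinate, i.e. conventional random testing.
    """
    if widget_boxes and rng.random() < widget_bias:
        x, y, w, h = rng.choice(widget_boxes)
        return (x + w // 2, y + h // 2)
    return (rng.randrange(screen_w), rng.randrange(screen_h))

# Example: two hypothetical detected widgets on an 800x600 screen.
boxes = [(10, 10, 100, 30), (200, 400, 80, 25)]
point = random_click(800, 600, boxes)
```

Because the fallback branch degrades gracefully to plain random clicking when no widgets are detected, such a generator never performs worse structurally than the uniform baseline it augments.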
Pages: 307-317 (11 pages)