Automated Video Game Testing Using Synthetic and Humanlike Agents

Cited by: 31
Authors
Ariyurek, Sinan [1 ]
Betin-Can, Aysu [1 ]
Surer, Elif [1 ]
Affiliations
[1] Middle East Tech Univ, Grad Sch Informat, TR-06800 Ankara, Turkey
Keywords
Games; Testing; Avatars; Water; Computer bugs; Sprites (computer); Monte Carlo methods; Automated game testing; graph coverage; inverse reinforcement learning (IRL); Monte Carlo tree search (MCTS); reinforcement learning (RL);
DOI
10.1109/TG.2019.2947597
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this article, we present a new methodology that employs tester agents to automate video game testing. We introduce two types of agents, synthetic and humanlike, and two distinct approaches to create them. Our agents are derived from Sarsa and Monte Carlo tree search (MCTS), but they focus on finding defects, whereas traditional game-playing agents focus on maximizing game scores. The synthetic agent uses test goals generated from game scenarios, and these goals are further modified to examine the effects of unintended game transitions. The humanlike agent uses test goals extracted from tester trajectories by our proposed multiple greedy-policy inverse reinforcement learning (MGP-IRL) algorithm, which captures the multiple policies executed by human testers. We use our agents to produce test sequences and run the game with these sequences; at each run, an automated test oracle checks for bugs. We analyze the proposed method in two parts: we compare the bug-finding success of humanlike and synthetic agents, and we evaluate the similarity between humanlike agents and human testers. We collected 427 trajectories from human testers using the General Video Game Artificial Intelligence (GVG-AI) framework and created three games with 12 levels containing 45 bugs. Our experiments reveal that the humanlike and synthetic agents are competitive with human testers in bug-finding performance. Moreover, we show that MGP-IRL increases the humanlikeness of the agents while improving their bug-finding performance.
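To make the abstract's framing more concrete, the sketch below shows a minimal tabular Sarsa loop whose reward favours reaching a designated test goal rather than maximising game score, which is the general idea of a tester agent as opposed to a game-playing agent. Everything in it (the toy GridGame environment, the reward values, and the hyperparameters) is an illustrative assumption, not the authors' GVG-AI implementation, their test-goal generation, or the MGP-IRL algorithm; the automated oracle that would check the replayed sequence for bugs is not modeled here.

```python
import random
from collections import defaultdict

class GridGame:
    """Toy deterministic grid world standing in for a GVG-AI level (illustrative only)."""
    def __init__(self, size=5, goal=(4, 4)):
        self.size, self.goal = size, goal

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        # Actions: 0=up, 1=down, 2=right, 3=left; movement is clamped to the grid.
        dx, dy = [(0, 1), (0, -1), (1, 0), (-1, 0)][action]
        self.pos = (min(max(self.pos[0] + dx, 0), self.size - 1),
                    min(max(self.pos[1] + dy, 0), self.size - 1))
        reached = self.pos == self.goal
        # Reward progress toward the test goal instead of game score.
        return self.pos, (1.0 if reached else -0.01), reached

def sarsa_tester(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Sarsa that learns a policy driving the avatar to a test goal.
    The action sequence it induces would then be replayed against the game
    and checked by an automated oracle for bugs (oracle not modeled here)."""
    Q = defaultdict(float)

    def choose(s):
        # Epsilon-greedy action selection over the four movement actions.
        if random.random() < eps:
            return random.randrange(4)
        return max(range(4), key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = env.reset()
        a = choose(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = choose(s2)
            # On-policy TD update: the target uses the action actually taken next.
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] * (not done) - Q[(s, a)])
            s, a = s2, a2
    return Q

if __name__ == "__main__":
    Q = sarsa_tester(GridGame())
    print("learned", len(Q), "state-action values")
```

In the paper's setting, one such agent would be trained per test goal, and the trajectories it produces would be executed in the game under an oracle that flags unintended transitions; the sketch only illustrates the "reward the test goal, not the score" design choice.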
Pages: 50-67
Page count: 18