Playtesting in Match 3 Game Using Strategic Plays via Reinforcement Learning

Cited by: 13
Authors
Shin, Yuchul [1 ]
Kim, Jaewon [1 ]
Jin, Kyohoon [1 ]
Kim, Young Bin [1 ]
Affiliations
[1] Chung Ang Univ, Dept Image Sci & Art, Dongjak 06974, South Korea
Source
IEEE ACCESS | 2020 / Vol. 8 / No. 08
Funding
National Research Foundation of Singapore;
Keywords
Games; Learning (artificial intelligence); Color; Licenses; Automation; Monte Carlo methods; Convolutional neural networks; Actor-critic; agent; artificial intelligence; game mission; game strategy; match 3; playtesting; reinforcement learning; NEURAL-NETWORKS; DEEP; GO;
DOI
10.1109/ACCESS.2020.2980380
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Playtesting is a lifecycle phase in game development wherein the completeness and smooth progress of planned content are verified before the release of a new game. Although studies on playtesting in Match 3 games have attempted to utilize Monte Carlo tree search (MCTS) and convolutional neural networks (CNNs), the applicability of these methods is limited because the associated training is time-consuming and data collection is difficult. To address this problem, game playtesting was performed via learning based on strategic play in Match 3 games. Five strategic plays were defined in the Match 3 game under consideration, and game playtesting was performed for each situation via reinforcement learning. The proposed agent performed within a 5% margin of human performance on the most complex mission in the experiment. We demonstrate that a level designer can measure the difficulty of a level by playtesting various missions. This study also provides level-testing standards for several types of missions in Match 3 games.
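The abstract pairs five predefined strategic plays with an actor-critic agent (per the keywords). As a rough illustration of how such an agent could be trained, the following is a minimal tabular one-step actor-critic sketch; the four toy states, the deterministic rollout, and the reward that favors one of five "strategic plays" are all hypothetical stand-ins, not the paper's agent or environment.

```python
import numpy as np

# Toy setting: a handful of board "situations" (states) and five actions
# standing in for the five strategic plays mentioned in the abstract.
N_STATES, N_ACTIONS = 4, 5

theta = np.zeros((N_STATES, N_ACTIONS))  # actor: action preferences
V = np.zeros(N_STATES)                   # critic: state-value estimates
ALPHA_ACTOR, ALPHA_CRITIC, GAMMA = 0.1, 0.2, 0.99

def policy(s):
    """Softmax over the actor's preferences for state s."""
    z = np.exp(theta[s] - theta[s].max())  # shift for numerical stability
    return z / z.sum()

def update(s, a, r, s_next):
    """One advantage actor-critic (TD) update."""
    td_error = r + GAMMA * V[s_next] - V[s]    # TD error as advantage estimate
    V[s] += ALPHA_CRITIC * td_error            # critic: move toward TD target
    grad = -policy(s)
    grad[a] += 1.0                             # gradient of log-softmax at a
    theta[s] += ALPHA_ACTOR * td_error * grad  # actor: policy-gradient step
    return td_error

# Deterministic toy rollout: cycle through states and actions; pretend
# strategic play 2 is the one that clears the current mission.
for step in range(40):
    s, a = step % N_STATES, step % N_ACTIONS
    r = 1.0 if a == 2 else 0.0
    update(s, a, r, (s + 1) % N_STATES)

print(policy(0))  # action 2 ends up with the largest probability
```

In the paper's setup the state would instead be the Match 3 board (the keywords suggest a CNN encoder), and repeated rollouts per mission would yield the clear-rate statistics a level designer uses to judge difficulty.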
Pages: 51593-51600
Page count: 8