Gym Hero: A Research Environment for Reinforcement Learning Agents in Rhythm Games

Times Cited: 0
Authors
Ferrer Filho, Romulo Freire [1 ]
Barbosa Nogueira, Yuri Lenon [2 ]
Vidal, Creto Augusto [2 ]
Cavalcante-Neto, Joaquim Bento [2 ]
de Sousa Serafim, Paulo Bruno [3 ]
Affiliations
[1] Fed Univ Ceara UFC, Teleinformat Engn Dept, Fortaleza, Ceara, Brazil
[2] Fed Univ Ceara UFC, Dept Comp DC, Fortaleza, Ceara, Brazil
[3] Inst Atlantico, Fortaleza, Ceara, Brazil
Source
2021 20TH BRAZILIAN SYMPOSIUM ON COMPUTER GAMES AND DIGITAL ENTERTAINMENT (SBGAMES 2021) | 2021
Keywords
autonomous agents; reinforcement learning; deep learning; reinforcement learning environments; rhythm games; Guitar Hero; DEEP; LEVEL
DOI
10.1109/SBGames54170.2021.00020
CLC Number
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
This work presents a Reinforcement Learning environment, called Gym Hero, based on the game Guitar Hero. It consists of a similar game implementation, developed with the PyGame graphics engine, that offers four difficulty levels and can randomly generate tracks. On top of the game, we implemented a Gym environment to train and evaluate Reinforcement Learning agents. To assess the environment's suitability as a learning tool, we ran a set of experiments in which three autonomous agents were trained with Deep Reinforcement Learning. Each agent was trained on a different difficulty level using Deep Q-Networks, a technique that combines Reinforcement Learning with Deep Neural Networks, and the network's only input is the raw pixels of the screen. We show that the agents learned the behaviors expected to play the game, and the obtained results validate the proposed environment as a tool for evaluating autonomous agents on Reinforcement Learning tasks.
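For reference, the sketch below shows the usual interaction loop for a Gym environment of this kind, in which an agent observes screen pixels and picks among discrete actions. The environment id "GymHero-v0" and the use of a random policy in place of the paper's Deep Q-Network agent are illustrative assumptions, not the authors' actual code.

```python
# Minimal sketch of interacting with a pixel-based Gym environment such as Gym Hero.
# The id "GymHero-v0" and the exact spaces are assumptions for illustration; this
# record does not give the environment's actual registration name or interface.
import gym

env = gym.make("GymHero-v0")   # hypothetical id for the Gym Hero environment
obs = env.reset()              # observation: the raw screen pixels

total_reward = 0.0
done = False
while not done:
    # A random policy stands in for the trained DQN agent described in the abstract.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)  # classic Gym step signature (pre-0.26 API)
    total_reward += reward

print("episode return:", total_reward)
env.close()
```

In the experiments described in the abstract, the random policy above would be replaced by a Deep Q-Network that maps the pixel observation to one Q-value per action.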
Pages: 87-96
Number of Pages: 10