Q-Learning with Naive Bayes Approach Towards More Engaging Game Agents

Cited by: 0
Authors
Yilmaz, Osman [1 ]
Celikcan, Ufuk [1 ]
Affiliations
[1] Hacettepe Univ, Dept Comp Engn, Ankara, Turkey
Source
2018 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND DATA PROCESSING (IDAP) | 2018
Keywords
Game AI; Reinforcement Learning; Q-Learning; Naive Bayes; Engaging Gameplay;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
One of the goals of modern game programming is to incorporate life-like characteristics and concepts into games. This approach is adopted to produce game agents that exhibit more engaging behavior. Methods that prioritize reward maximization cause the game agent to fall into the same patterns, leading to a repetitive gaming experience and reduced playability. To prevent such repetitive patterns, we explore a behavior algorithm based on Q-learning with a Naive Bayes approach. The algorithm is validated in a formal user study against a benchmark. The results of the study demonstrate that the algorithm outperforms the benchmark and that the game agent becomes more engaging as the amount of gameplay data from which the algorithm learns increases.
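To make the abstract's idea concrete, the following is a minimal sketch of how a Q-learning agent might be combined with a Naive Bayes-style frequency estimate to discourage repetitive action patterns. The paper does not spell out its exact formulation here, so the class name, hyperparameters, and the specific penalty term are all illustrative assumptions, not the authors' method:

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (not taken from the paper)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1


class QNaiveBayesAgent:
    """Sketch: tabular Q-learning whose greedy action choice is biased by a
    Laplace-smoothed estimate of P(action | state), penalizing actions the
    agent repeats often in a state. Hypothetical combination for illustration."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.q = defaultdict(float)           # Q[(state, action)] values
        self.counts = defaultdict(int)        # observed (state, action) frequencies
        self.state_counts = defaultdict(int)  # observed state frequencies

    def act(self, state):
        # Standard epsilon-greedy exploration
        if random.random() < EPSILON:
            return random.choice(self.actions)

        def repeat_prob(a):
            # Laplace-smoothed conditional frequency, Naive Bayes style
            return (self.counts[(state, a)] + 1) / (
                self.state_counts[state] + len(self.actions)
            )

        # Subtract the repetition probability so frequently repeated
        # actions score lower, nudging the agent toward varied behavior
        return max(self.actions, key=lambda a: self.q[(state, a)] - repeat_prob(a))

    def update(self, state, action, reward, next_state):
        self.counts[(state, action)] += 1
        self.state_counts[state] += 1
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + GAMMA * best_next
        # Standard Q-learning temporal-difference update
        self.q[(state, action)] += ALPHA * (td_target - self.q[(state, action)])
```

The key design point is that reward maximization (the Q-value) and behavioral variety (the frequency penalty) are traded off at action-selection time, so learning itself remains plain Q-learning.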
Pages: 6