Can Monte-Carlo Tree Search learn to sacrifice?

Cited by: 0
Authors
Nathan Companez
Aldeida Aleti
Affiliation
[1] Monash University,Faculty of Information Technology
Source
Journal of Heuristics | 2016 / Volume 22
Keywords
Monte-Carlo Tree Search; Sacrifice moves; Artificial intelligence; Games;
DOI
Not available
Abstract
One of the most basic activities performed by an intelligent agent is deciding what to do next. The decision is usually between selecting the move with the highest expectation and exploring new scenarios. Monte-Carlo Tree Search (MCTS), which was developed as a game-playing agent, deals with this exploration–exploitation 'dilemma' using a multi-armed bandit strategy. The success of MCTS in a wide range of problems, such as combinatorial optimisation, reinforcement learning, and games, is due to its ability to rapidly evaluate problem states without requiring domain-specific knowledge. However, it has been acknowledged that the trade-off between exploration and exploitation is crucial for the performance of the algorithm, and affects the efficiency of the agent in learning deceptive states. One type of deception is states that give immediate rewards, but lead to a suboptimal solution in the long run. These states are known as trap states, and have been thoroughly investigated in previous research. In this work, we study the opposite of trap states, known as sacrifice states, which are deceptive moves that result in a local loss but are globally optimal, and investigate the efficiency of MCTS enhancements in identifying this type of move.
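The exploration–exploitation trade-off described in the abstract is commonly handled in MCTS by the UCB1 bandit rule, the basis of the UCT variant. A minimal sketch of the selection step, assuming each child node tracks cumulative reward and visit counts (the dict layout, function name, and exploration constant `c` are illustrative, not from the paper):

```python
import math

def ucb1_select(children, c=1.41):
    """Pick the child maximizing the UCB1 score.

    children: list of dicts with cumulative 'reward' and 'visits'.
    The first term favours exploitation (high average payoff);
    the second favours exploration (rarely visited moves).
    """
    total_visits = sum(ch["visits"] for ch in children)
    best, best_score = None, float("-inf")
    for ch in children:
        if ch["visits"] == 0:
            return ch  # always try unvisited moves first
        exploit = ch["reward"] / ch["visits"]
        explore = c * math.sqrt(math.log(total_visits) / ch["visits"])
        if exploit + explore > best_score:
            best, best_score = ch, exploit + explore
    return best
```

Under this rule, a sacrifice move looks unattractive early on (low average reward), which is why identifying such moves stresses the exploration term of the trade-off.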
Pages: 783–813
Page count: 30