On Monte Carlo Tree Search and Reinforcement Learning

Cited by: 42
Authors
Vodopivec, Tom [1 ]
Samothrakis, Spyridon [2 ]
Ster, Branko [1 ]
Affiliations
[1] Univ Ljubljana, Fac Comp & Informat Sci, Vecna Pot 113, Ljubljana, Slovenia
[2] Univ Essex, Inst Data Sci & Analyt, Wivenhoe Pk, Colchester CO4 3SQ, Essex, England
Keywords
GOOD-REPLY POLICY; GAME; GO
DOI
10.1613/jair.5507
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved widespread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not yet been thoroughly studied. In this paper we re-examine this close relationship between the two fields in depth; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, of which traditional MCTS is only one variant. We confirm that planning methods inspired by RL, in conjunction with online search, demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.
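The abstract's central claim, that classic MCTS is one point in a family of RL-style value updates, can be illustrated compactly. Below is a minimal, hypothetical Python sketch (the node structure and the constant step-size variant are illustrative assumptions, not the authors' exact algorithm): the standard Monte Carlo backup is an incremental average, i.e., an RL value update with step size 1/visits, and merely swapping in a fixed step size already yields a different member of the family.

```python
class Node:
    """Tree node holding a value estimate; illustrative, not the paper's code."""
    def __init__(self):
        self.visits = 0
        self.value = 0.0

def backup_monte_carlo(path, ret):
    """Classic MCTS backup: incremental average of simulation returns.
    Equivalent to an RL value update with step size alpha = 1 / visits."""
    for node in path:
        node.visits += 1
        node.value += (ret - node.value) / node.visits

def backup_constant_alpha(path, ret, alpha=0.1):
    """RL-flavoured variant: a fixed step size weights recent simulations
    more heavily, one of many updates (e.g., TD(lambda)-style backups)
    that become available once the backup is framed as value learning."""
    for node in path:
        node.visits += 1
        node.value += alpha * (ret - node.value)
```

Framing the backup this way is what lets RL machinery (eligibility traces, different step-size schedules) drop into the tree-search loop without changing the selection or expansion phases.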
Pages: 881-936
Page count: 56