A reinforcement learning scheme for a partially-observable multi-agent game

Cited by: 15
Authors
Ishii, S
Fujita, H
Mitsutake, M
Yamazaki, T
Matsuda, J
Matsuno, Y
Affiliations
[1] CREST, Japan Sci & Technol Agcy, Nara Inst Sci & Technol, Ikoma 6300192, Japan
[2] Natl Inst Informat & Commun Technol, Kyoto 6190289, Japan
[3] Osaka Gakuin Univ, Suita, Osaka 5648511, Japan
[4] Ricoh Co Ltd, Tokyo 1120002, Japan
Funding
Japan Society for the Promotion of Science;
Keywords
reinforcement learning; POMDP; multi-agent system; card game; model-based;
DOI
10.1007/s10994-005-0461-8
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We formulate an automatic strategy-acquisition problem for the multi-agent card game "Hearts" as a reinforcement learning problem. The problem can be dealt with approximately in the framework of a partially observable Markov decision process (POMDP) for a single-agent system. Hearts is an example of an imperfect-information game, a class more difficult to handle than perfect-information games. A POMDP is a decision problem that includes a process for estimating unobservable state variables; by regarding missing information as unobservable state variables, an imperfect-information game can be formulated as a POMDP. However, the game of Hearts is a realistic problem with a huge number of possible states, even when approximated as a single-agent system, so further approximation is necessary to make the strategy-acquisition problem tractable. This article presents an approximation method based on estimating the unobservable state variables and predicting the actions of the other agents. Simulation results show that our reinforcement learning method is applicable to such a difficult multi-agent problem.
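To make the two approximations concrete, here is a minimal sketch in Python. It is not the authors' implementation: the toy deal, the function names (opponent_policy, update_belief, expected_value), and the softmax preference parameter are all assumptions, and the softmax is a stand-in for the learned opponent action-prediction model the article describes.

```python
import math

# Hypothetical sketch, not the paper's code: a particle-style belief over
# the opponents' hidden cards (the unobservable state variables) plus a
# softmax stand-in for the learned opponent action-prediction model.

def opponent_policy(hand, preference=0.1):
    """Stand-in action-prediction model: softmax over the cards
    an opponent could play from its (hypothesised) hand."""
    weights = {c: math.exp(preference * c) for c in hand}
    z = sum(weights.values())
    return {c: w / z for c, w in weights.items()}

def update_belief(particles, opponent, played_card):
    """Bayes-style belief update after `opponent` plays `played_card`:
    discard hidden-hand hypotheses inconsistent with the observation and
    re-weight the rest by the predicted probability of that play."""
    updated = []
    for hands, weight in particles:
        if played_card not in hands[opponent]:
            continue  # hypothesis contradicts the observation
        likelihood = opponent_policy(hands[opponent])[played_card]
        next_hands = {o: (h - {played_card}) if o == opponent else h
                      for o, h in hands.items()}
        updated.append((next_hands, weight * likelihood))
    z = sum(w for _, w in updated)
    return [(h, w / z) for h, w in updated]  # renormalised belief

def expected_value(belief, action, value_fn):
    """Marginalise an action's value over the belief, mirroring how the
    POMDP formulation treats missing information as hidden state."""
    return sum(w * value_fn(action, hands) for hands, w in belief)

# Toy usage: six unseen cards dealt among three opponents, two equally
# likely deals, then opponent 0 is observed playing card 5.
deal_a = {0: {2, 5}, 1: {1, 4}, 2: {0, 3}}
deal_b = {0: {1, 5}, 1: {2, 4}, 2: {0, 3}}
belief = [(deal_a, 0.5), (deal_b, 0.5)]
belief = update_belief(belief, opponent=0, played_card=5)
print(expected_value(belief, 6, lambda a, hands: a - max(hands[1])))
```

Each particle is one hypothesis about the complete hidden state, so the update step plays the role of the state-estimation process the abstract refers to, and action selection marginalises a value estimate over that belief.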
Pages: 31-54
Page count: 24