Multiagent learning using a variable learning rate

Cited by: 477
Authors
Bowling, M [1 ]
Veloso, M [1 ]
Affiliation
[1] Carnegie Mellon Univ, Dept Comp Sci, Pittsburgh, PA 15213 USA
Keywords
multiagent learning; reinforcement learning; game theory
DOI
10.1016/S0004-3702(02)00121-2
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Learning to act in a multiagent environment is a difficult problem, since the usual definition of an optimal policy no longer applies: the optimal policy at any moment depends on the policies of the other agents, so the learner is effectively chasing a moving target. Previous learning algorithms have one of two shortcomings, depending on their approach: they either converge to a policy that may not be optimal against the specific opponents' policies, or they may not converge at all. In this article we examine this learning problem in the framework of stochastic games. We review a number of previous learning algorithms, showing how each fails to meet one of the above criteria. We then contribute a new reinforcement learning technique that uses a variable learning rate to overcome these shortcomings. Specifically, we introduce the WoLF principle, "Win or Learn Fast", for varying the learning rate. We examine this technique theoretically, proving convergence in self-play on a restricted class of iterated matrix games. We also present empirical results on a variety of more general stochastic games, in self-play and otherwise, demonstrating the wide applicability of this method. (C) 2002 Published by Elsevier Science B.V.
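The abstract describes the WoLF ("Win or Learn Fast") principle: learn cautiously (small step size) while the current policy is winning, and learn quickly (large step size) while it is losing. The Python sketch below is a minimal, illustrative rendering of that idea as policy hill-climbing in self-play on matching pennies; the choice of game, the step sizes (alpha, delta_win, delta_lose), and the simple clip-and-renormalize projection are assumptions made for this example, not the paper's exact algorithm or experimental setup.

# Minimal WoLF-style policy hill-climbing sketch, self-play on matching pennies.
import random

ACTIONS = [0, 1]                      # heads, tails
PAYOFF = [[1, -1], [-1, 1]]           # row player's payoff; zero-sum game

def make_agent():
    return {
        "Q": [0.0, 0.0],              # action-value estimates
        "pi": [0.5, 0.5],             # current mixed policy
        "pi_bar": [0.5, 0.5],         # running average policy
        "count": 0,                   # updates so far, used to average the policy
    }

def choose(agent):
    return 0 if random.random() < agent["pi"][0] else 1

def update(agent, action, reward, alpha=0.1, delta_win=0.01, delta_lose=0.04):
    # Q-learning update for the chosen action (single-state game, no bootstrap).
    agent["Q"][action] += alpha * (reward - agent["Q"][action])

    # Incrementally update the average policy.
    agent["count"] += 1
    for a in ACTIONS:
        agent["pi_bar"][a] += (agent["pi"][a] - agent["pi_bar"][a]) / agent["count"]

    # WoLF: small step when the current policy is "winning" (its expected value
    # under Q exceeds the average policy's), larger step otherwise.
    v_pi = sum(agent["pi"][a] * agent["Q"][a] for a in ACTIONS)
    v_bar = sum(agent["pi_bar"][a] * agent["Q"][a] for a in ACTIONS)
    delta = delta_win if v_pi > v_bar else delta_lose

    # Hill-climb toward the greedy action, then keep a valid distribution.
    best = max(ACTIONS, key=lambda a: agent["Q"][a])
    for a in ACTIONS:
        step = delta if a == best else -delta / (len(ACTIONS) - 1)
        agent["pi"][a] = min(1.0, max(0.0, agent["pi"][a] + step))
    total = sum(agent["pi"])
    agent["pi"] = [p / total for p in agent["pi"]]

row, col = make_agent(), make_agent()
for _ in range(50000):
    a_r, a_c = choose(row), choose(col)
    r = PAYOFF[a_r][a_c]
    update(row, a_r, r)
    update(col, a_c, -r)              # zero-sum: column player receives the negative
print("row policy:", row["pi"], "col policy:", col["pi"])  # should settle near [0.5, 0.5]

In matching pennies the only Nash equilibrium is the mixed policy (0.5, 0.5) for both players; a fixed learning rate tends to cycle around it, while the win/lose switch above is intended to damp those oscillations, which is the convergence behavior the paper analyzes.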
Pages: 215-250
Number of pages: 36