Learning in Games via Reinforcement and Regularization

Cited by: 92
Authors
Mertikopoulos, Panayotis [1,2]
Sandholm, William H. [3]
Affiliations
[1] CNRS, French Natl Ctr Sci Res, LIG, F-38000 Grenoble, France
[2] Univ Grenoble Alpes, LIG, F-38000 Grenoble, France
[3] Univ Wisconsin, Dept Econ, Madison, WI 53706 USA
Funding
National Science Foundation (USA);
Keywords
Bregman divergence; dominated strategies; equilibrium stability; Fenchel coupling; penalty functions; projection dynamics; regularization; reinforcement learning; replicator dynamics; time averages; DYNAMICAL-SYSTEMS; CONVERGENCE; REPLICATOR; STABILITY; GEOMETRY;
DOI
10.1287/moor.2016.0778
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research];
Subject Classification Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
We investigate a class of reinforcement learning dynamics where players adjust their strategies based on their actions' cumulative payoffs over time; specifically, they play mixed strategies that maximize their expected cumulative payoff minus a regularization term. A widely studied example is exponential reinforcement learning, a process induced by an entropic regularization term, which leads mixed strategies to evolve according to the replicator dynamics. However, in contrast to the class of regularization functions used to define smooth best responses in models of stochastic fictitious play, the functions used in this paper need not be infinitely steep at the boundary of the simplex; in fact, dropping this requirement gives rise to an important dichotomy between steep and nonsteep cases. In this general framework, we extend several properties of exponential learning, including the elimination of dominated strategies, the asymptotic stability of strict Nash equilibria, and the convergence of time-averaged trajectories in zero-sum games with an interior Nash equilibrium.
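To illustrate the kind of dynamic the abstract describes, here is a minimal sketch (not taken from the paper) of the entropic special case: discrete-time exponential reinforcement learning in a 2x2 zero-sum game. The payoff matrix, step size eta, horizon T, and initial scores are illustrative assumptions. Each player scores every pure action by its cumulative payoff and plays the logit (softmax) choice map, i.e. the maximizer of expected cumulative payoff minus an entropic regularizer; in a zero-sum game with an interior equilibrium the time-averaged strategies stay close to that equilibrium, in the spirit of the time-average result stated in the abstract (the paper's continuous-time dynamics give exact convergence).

import numpy as np

# Matching Pennies: payoff matrix of player 1; player 2 receives -A (zero-sum).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def logit_choice(scores, eta=0.05):
    # Entropic choice map: maximizer over the simplex of
    # <x, scores> - (1/eta) * sum_i x_i log(x_i), i.e. a softmax of the
    # cumulative payoff scores (exponential reinforcement learning).
    z = np.exp(eta * (scores - scores.max()))   # subtract max for numerical stability
    return z / z.sum()

T = 50000
U1 = np.array([1.0, 0.0])              # cumulative payoff scores; start off-equilibrium
U2 = np.zeros(2)
avg1, avg2 = np.zeros(2), np.zeros(2)  # running time averages of the mixed strategies

for t in range(1, T + 1):
    x1, x2 = logit_choice(U1), logit_choice(U2)
    U1 += A @ x2          # expected payoff of each pure action of player 1 against x2
    U2 += -(A.T @ x1)     # player 2's payoffs are the negatives (zero-sum)
    avg1 += (x1 - avg1) / t
    avg2 += (x2 - avg2) / t

print("time-averaged strategies:", avg1, avg2)  # both end up near the interior equilibrium (0.5, 0.5)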
Pages: 1297-1324
Page count: 28