Learning in Games via Reinforcement and Regularization

Cited by: 83
Authors:
Mertikopoulos, Panayotis [1 ,2 ]
Sandholm, William H. [3 ]
Affiliations:
[1] CNRS, French Natl Ctr Sci Res, LIG, F-38000 Grenoble, France
[2] Univ Grenoble Alpes, LIG, F-38000 Grenoble, France
[3] Univ Wisconsin, Dept Econ, Madison, WI 53706 USA
Funding:
U.S. National Science Foundation
Keywords:
Bregman divergence; dominated strategies; equilibrium stability; Fenchel coupling; penalty functions; projection dynamics; regularization; reinforcement learning; replicator dynamics; time averages; DYNAMICAL-SYSTEMS; CONVERGENCE; REPLICATOR; STABILITY; GEOMETRY;
DOI:
10.1287/moor.2016.0778
Chinese Library Classification (CLC):
C93 [Management]; O22 [Operations Research]
Subject classification codes:
070105; 12; 1201; 1202; 120202
Abstract:
We investigate a class of reinforcement learning dynamics where players adjust their strategies based on their actions' cumulative payoffs over time: specifically, by playing mixed strategies that maximize their expected cumulative payoff minus a regularization term. A widely studied example is exponential reinforcement learning, a process induced by an entropic regularization term which leads mixed strategies to evolve according to the replicator dynamics. However, in contrast to the class of regularization functions used to define smooth best responses in models of stochastic fictitious play, the functions used in this paper need not be infinitely steep at the boundary of the simplex; in fact, dropping this requirement gives rise to an important dichotomy between steep and nonsteep cases. In this general framework, we extend several properties of exponential learning, including the elimination of dominated strategies, the asymptotic stability of strict Nash equilibria, and the convergence of time-averaged trajectories in zero-sum games with an interior Nash equilibrium.
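A minimal numerical sketch of the entropic special case described in the abstract, assuming a discrete-time exponential-weights scheme in Matching Pennies (a zero-sum game whose unique Nash equilibrium is the interior point (1/2, 1/2)); the payoff matrix A, step size eta, horizon T, and initial scores below are illustrative choices, not taken from the paper. Cumulative payoff scores are mapped to mixed strategies through a softmax (the entropic choice map), and the running time averages of those strategies are printed at the end; they should land close to the interior equilibrium, in line with the time-average convergence mentioned above.

    import numpy as np

    # Matching Pennies: payoffs to player 1; player 2 receives -A (illustrative game choice).
    A = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])

    def softmax(y):
        # Entropic-regularization choice map: cumulative scores -> mixed strategy.
        z = np.exp(y - y.max())
        return z / z.sum()

    eta, T = 0.01, 200_000                  # step size and horizon (illustrative values)
    y1 = np.array([0.5, 0.0])               # cumulative payoff scores, started off-equilibrium
    y2 = np.zeros(2)
    avg1, avg2 = np.zeros(2), np.zeros(2)   # running time averages of the mixed strategies

    for t in range(1, T + 1):
        x1, x2 = softmax(y1), softmax(y2)   # current mixed strategies
        y1 += eta * (A @ x2)                # reinforce each action by its expected payoff
        y2 += eta * (-A.T @ x1)
        avg1 += (x1 - avg1) / t
        avg2 += (x2 - avg2) / t

    print(avg1, avg2)   # both averages should be close to [0.5, 0.5]
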
Pages: 1297-1324 (28 pages)
Related papers (items [11]-[20] of 50 shown):
  • [11] Tabular Reinforcement Learning in Real-Time Strategy Games via Options
    Tavares, Anderson R.
    Chaimowicz, Luiz
    PROCEEDINGS OF THE 2018 IEEE CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND GAMES (CIG'18), 2018, : 229 - 236
  • [12] A modeling environment for reinforcement learning in games
    Gomes, Gilzamir
    Vidal, Creto A.
    Cavalcante-Neto, Joaquim B.
    Nogueira, Yuri L. B.
    ENTERTAINMENT COMPUTING, 2022, 43
  • [13] Deep Reinforcement Learning and Influenced Games
    Brady, C.
    Gonen, R.
    Rabinovich, G.
    IEEE ACCESS, 2024, 12 : 114086 - 114099
  • [14] Baselines for Reinforcement Learning in Text Games
    Zelinka, Mikulas
    2018 IEEE 30TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI), 2018, : 320 - 327
  • [15] ARFace: Attention-Aware and Regularization for Face Recognition With Reinforcement Learning
    Zhang, Liping
    Sun, Linjun
    Yu, Lina
    Dong, Xiaoli
    Chen, Jinchao
    Cai, Weiwei
    Wang, Chen
    Ning, Xin
    IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2022, 4 (01) : 30 - 42
  • [16] Reinforcement learning with foregone payoff information in normal form games
    Funai, Naoki
    JOURNAL OF ECONOMIC BEHAVIOR & ORGANIZATION, 2022, 200 : 638 - 660
  • [17] Choquet Regularization for Continuous-Time Reinforcement Learning
    Han, Xia
    Wang, Ruodu
    Zhou, Xun Yu
    SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2023, 61 (05) : 2777 - 2801
  • [18] Parallelization of Reinforcement Learning Algorithms for Video Games
    Kopel, Marek
    Szczurek, Witold
    INTELLIGENT INFORMATION AND DATABASE SYSTEMS, ACIIDS 2021, 2021, 12672 : 195 - 207
  • [19] Modeling Decisions in Games Using Reinforcement Learning
    Singal, Himanshu
    Aggarwal, Palvi
    Dutt, Varun
    2017 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND DATA SCIENCE (MLDS 2017), 2017, : 98 - 105
  • [20] Distilling Reinforcement Learning Tricks for Video Games
    Kanervisto, Anssi
    Scheller, Christian
    Schraner, Yanick
    Hautamaki, Ville
    2021 IEEE CONFERENCE ON GAMES (COG), 2021, : 1088 - 1091