The evolutionary dynamics of soft-max policy gradient in multi-agent settings

Cited by: 0
Authors
Bernasconi, Martino [1 ]
Cacciamani, Federico [1 ]
Fioravanti, Simone [2 ]
Gatti, Nicola [1 ]
Trovo, Francesco [1 ]
Affiliations
[1] Politecnico di Milano, Milan, Italy
[2] Gran Sasso Science Institute, L'Aquila, Italy
Keywords
Game theory; Evolutionary game theory; Reinforcement learning; Multiagent learning
DOI
10.1016/j.tcs.2024.115011
CLC number
TP301 [Theory, Methods]
Subject classification code
081202
Abstract
Policy gradient is one of the most famous algorithms in reinforcement learning. This paper studies the mean dynamics of the soft-max policy gradient algorithm and its properties in multi-agent settings, resorting to evolutionary game theory and dynamical-systems tools. Unlike most multi-agent reinforcement learning algorithms, whose mean dynamics are slight variants of the replicator dynamics that do not affect the properties of the original dynamics, the soft-max policy gradient dynamics present a structure significantly different from that of the replicator. In particular, we show that the soft-max policy gradient dynamics in a given game are equivalent to the replicator dynamics in an auxiliary game obtained by a non-convex transformation of the payoffs of the original game. This structure gives the dynamics several non-standard properties. The first property we study concerns convergence to the best response. While the continuous-time mean dynamics always converge to the best response, the crucial question concerns the convergence speed. Precisely, we show that the space of initializations can be split into two complementary sets such that trajectories initialized from points of the first set (the good initialization region) move directly to the best response, whereas those initialized from points of the second set (the bad initialization region) first move through a series of sub-optimal strategies and only then to the best response. Interestingly, in multi-agent adversarial machine-learning environments, we show that an adversary can exploit this property to drive any current strategy of a learning agent using the soft-max policy gradient into a bad initialization region, thus slowing its learning process and exploiting that policy. When the soft-max policy gradient dynamics are studied in multi-population games, modeling the learning dynamics in self-play, we show that the dynamics preserve the volume of the set of initial points. This property proves that the dynamics cannot converge when the only equilibrium of the game is fully mixed, as the volume of the set of initial points would need to shrink. We also give empirical evidence that the volume expands over time, suggesting that the dynamics in games with a fully-mixed equilibrium are chaotic.
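The mean dynamics the abstract describes can be sketched numerically. The snippet below is an illustrative, simplified instance, not the paper's construction: it Euler-discretizes the continuous-time soft-max policy-gradient ODE for a single agent facing a fixed two-action payoff vector, so the trajectory converges to the best response as the abstract states for the single-population case. The payoff vector `u`, step size `eta`, and step count are arbitrary choices for illustration.

```python
import math

def softmax(theta):
    """Map logits theta to a probability distribution over actions."""
    m = max(theta)                      # subtract max for numerical stability
    exps = [math.exp(t - m) for t in theta]
    s = sum(exps)
    return [e / s for e in exps]

def pg_trajectory(u, theta, eta=0.1, steps=5000):
    """Euler discretization of the mean soft-max policy-gradient ODE.

    For a bandit with expected payoffs u and policy x = softmax(theta),
    the expected gradient of <x, u> in theta_i is x_i * (u_i - <x, u>),
    which gives the ODE  d theta_i / dt = x_i * (u_i - <x, u>).
    """
    theta = list(theta)
    for _ in range(steps):
        x = softmax(theta)
        avg = sum(xi * ui for xi, ui in zip(x, u))   # <x, u>
        theta = [t + eta * xi * (ui - avg)
                 for t, xi, ui in zip(theta, x, u)]
    return softmax(theta)

u = [1.0, 0.5]                          # action 0 is the best response
x_final = pg_trajectory(u, [0.0, 0.0])  # uniform initialization
```

Starting from the uniform policy, `x_final` places almost all mass on the best response (action 0). With more actions, initializing the logits strongly in favor of a sub-optimal action produces the slow escape through sub-optimal strategies that the abstract attributes to the bad initialization region.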
Pages: 23
Related papers
50 records
  • [41] Target tracking and obstacle clearing with multi-agent expert strategy gradient
    Sun H.-H.
    Hu C.-H.
    Zhang J.-G.
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2022, 39 (10): : 1854 - 1864
  • [42] Learning with policy prediction in continuous state-action multi-agent decision processes
    Farzaneh Ghorbani
    Mohsen Afsharchi
    Vali Derhami
    Soft Computing, 2020, 24 : 901 - 918
  • [44] Resource Allocation in Decentralized, Self-organized, Multi-agent Industrial Systems Using Deep Deterministic Policy Gradient
    Vytruchenko, Y.
    Nentwich, C.
    Sauer, M.
    Nickles, J.
    2021 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL ENGINEERING AND ENGINEERING MANAGEMENT (IEEE IEEM21), 2021, : 1198 - 1202
  • [45] Reinforcement learning multi-agent system for faults diagnosis of microservices in industrial settings
    Belhadi, Asma
    Djenouri, Youcef
    Srivastava, Gautam
    Lin, Jerry Chun-Wei
    COMPUTER COMMUNICATIONS, 2021, 177 : 213 - 219
  • [46] Novel task decomposed multi-agent twin delayed deep deterministic policy gradient algorithm for multi-UAV autonomous path planning
    Zhou, Yatong
    Kong, Xiaoran
    Lin, Kuo-Ping
    Liu, Liangyu
    KNOWLEDGE-BASED SYSTEMS, 2024, 287
  • [47] Dynamical systems as a level of cognitive analysis of multi-agent learning: Algorithmic foundations of temporal-difference learning dynamics
    Barfuss, Wolfram
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (03): : 1653 - 1671
  • [48] Maximizing Local Rewards on Multi-Agent Quantum Games through Gradient-Based Learning Strategies
    Silva, Agustin
    Zabaleta, Omar Gustavo
    Arizmendi, Constancio Miguel
    Lo Franco, Rosario
    ENTROPY, 2023, 25 (11)
  • [49] Evolutionary Game Theoretic Approach for Optimal Resource Allocation in Multi-Agent Systems
    Sun, Changhao
    Wang, Xiaochu
    Liu, Jiaxin
    2017 CHINESE AUTOMATION CONGRESS (CAC), 2017, : 5588 - 5592
  • [50] An Evolutionary Game Coordinated Control Approach to Division of Labor in Multi-Agent Systems
    Du, Jinming
    IEEE ACCESS, 2019, 7 : 124295 - 124308