V-Learning: A Simple, Efficient, Decentralized Algorithm for Multiagent Reinforcement Learning

Cited by: 0
|
Authors
Jin, Chi [1 ]
Liu, Qinghua [1 ]
Wang, Yuanhao [2 ]
Yu, Tiancheng [3 ]
Affiliations
[1] Princeton Univ, Dept Elect & Comp Engn, Princeton, NJ 08544 USA
[2] Princeton Univ, Dept Comp Sci, Princeton, NJ 08544 USA
[3] MIT, Dept Elect & Comp Engn, Cambridge, MA 02139 USA
Keywords
V-learning; Markov games; multiagent reinforcement learning; decentralized reinforcement learning; Nash equilibria; (coarse) correlated equilibria; GAMES; GO;
DOI
10.1287/moor.2021.0317
Chinese Library Classification
C93 [Management]; O22 [Operations Research];
Discipline codes
070105; 12; 1201; 1202; 120202;
Abstract
A major challenge of multiagent reinforcement learning (MARL) is the curse of multiagents, where the size of the joint action space scales exponentially with the number of agents. This remains a bottleneck for designing efficient MARL algorithms, even in the basic scenario with finitely many states and actions. This paper resolves this challenge for the model of episodic Markov games. We design a new class of fully decentralized algorithms, V-learning, which provably learns Nash equilibria (in the two-player zero-sum setting), correlated equilibria, and coarse correlated equilibria (in the multiplayer general-sum setting) in a number of samples that scales only with max_{i ∈ [m]} A_i, where A_i is the number of actions of the ith player. This is in sharp contrast to the size of the joint action space, which is ∏_{i=1}^{m} A_i. V-learning (in its basic form) is a new class of single-agent reinforcement learning (RL) algorithms that convert any adversarial bandit algorithm with suitable regret guarantees into an RL algorithm. Similar to the classical Q-learning algorithm, it performs incremental updates to the value functions. Unlike Q-learning, it maintains only estimates of V-values instead of Q-values. This key difference allows V-learning to achieve the claimed guarantees in the MARL setting by simply letting all agents run V-learning independently.
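To make the update rule concrete, below is a minimal single-agent Python sketch of a V-learning-style learner, assuming an EXP3-style exponential-weights update as a stand-in for "any adversarial bandit algorithm with suitable regret guarantees." The class name VLearningAgent, the (H+1)/(H+t) step size, the sqrt(H^3/t) bonus, and the clipping to the valid value range are illustrative choices for this sketch, not the paper's exact specification.

```python
import numpy as np


class VLearningAgent:
    """One player's V-learning-style learner for an episodic Markov game (sketch).

    The bandit subroutine is an EXP3-style exponential-weights update over the
    player's own actions; it stands in for any adversarial bandit with suitable
    regret guarantees. Constants and names here are illustrative assumptions.
    """

    def __init__(self, n_states, n_actions, horizon, bonus_c=1.0, bandit_lr=0.1):
        self.S, self.A, self.H = n_states, n_actions, horizon
        self.c = bonus_c                      # scale of the optimism bonus (assumed)
        self.eta = bandit_lr                  # bandit learning rate (assumed)
        # Optimistic V estimates; V[H] stays 0 (no reward after the horizon).
        self.V = np.full((horizon + 1, n_states), float(horizon))
        self.V[horizon] = 0.0
        self.V_tilde = self.V.copy()          # running (unclipped) estimate
        self.visits = np.zeros((horizon, n_states), dtype=int)
        # One exponential-weights bandit per (step, state), over own actions only.
        self.log_w = np.zeros((horizon, n_states, n_actions))

    def policy(self, h, s):
        w = np.exp(self.log_w[h, s] - self.log_w[h, s].max())
        return w / w.sum()

    def act(self, h, s, rng):
        return rng.choice(self.A, p=self.policy(h, s))

    def update(self, h, s, a, r, s_next):
        """Incremental V-value update plus one bandit feedback step."""
        self.visits[h, s] += 1
        t = self.visits[h, s]
        alpha = (self.H + 1) / (self.H + t)        # H/(H+t)-style step size
        bonus = self.c * np.sqrt(self.H ** 3 / t)  # optimism bonus (assumed form)
        target = r + self.V[h + 1, s_next] + bonus
        self.V_tilde[h, s] = (1 - alpha) * self.V_tilde[h, s] + alpha * target
        self.V[h, s] = min(self.H - h, self.V_tilde[h, s])  # clip to valid range
        # Importance-weighted loss for the chosen action only (EXP3 style).
        p = self.policy(h, s)[a]
        loss = (self.H - r - self.V[h + 1, s_next]) / self.H
        self.log_w[h, s, a] -= self.eta * loss / max(p, 1e-8)
```

In the multiagent setting, each of the m players would run its own copy of such a learner on the shared environment, never observing the other players' actions; this independence is what keeps the sample complexity scaling with max_{i ∈ [m]} A_i rather than with the joint action space ∏_{i=1}^{m} A_i. Extracting an equilibrium policy additionally uses the paper's certified-policy construction, which is omitted from this sketch.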
Pages: 2295-2322
Page count: 28
Related papers
50 records in total (records 31-40 shown)
  • [31] Constrained Multiagent Reinforcement Learning for Large Agent Population
    Ling, Jiajing
    Singh, Arambam James
    Thien, Nguyen Duc
    Kumar, Akshat
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT IV, 2023, 13716 : 183 - 199
  • [32] Deep Decentralized Reinforcement Learning for Cooperative Control
    Koepf, Florian
    Tesfazgi, Samuel
    Flad, Michael
    Hohmann, Soeren
    IFAC PAPERSONLINE, 2020, 53 (02): : 1555 - 1562
  • [33] ASN: action semantics network for multiagent reinforcement learning
    Yang, Tianpei
    Wang, Weixun
    Hao, Jianye
    Taylor, Matthew E.
    Liu, Yong
    Hao, Xiaotian
    Hu, Yujing
    Chen, Yingfeng
    Fan, Changjie
    Ren, Chunxu
    Huang, Ye
    Zhu, Jiangcheng
    Gao, Yang
    AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 2023, 37 (02)
  • [34] Domain-Aware Multiagent Reinforcement Learning in Navigation
    Saeed, Ifrah
    Cullen, Andrew C.
    Erfani, Sarah
    Alpcan, Tansu
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [35] An Online Distributed Satellite Cooperative Observation Scheduling Algorithm Based on Multiagent Deep Reinforcement Learning
    Li Dalin
    Wang Haijiao
    Yang Zhen
    Gu Yanfeng
    Shen Shi
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2021, 18 (11) : 1901 - 1905
  • [36] A Proactive Eavesdropping Game in MIMO Systems Based on Multiagent Deep Reinforcement Learning
    Guo, Delin
    Ding, Hui
    Tang, Lan
    Zhang, Xinggan
    Yang, Lvxi
    Liang, Ying-Chang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (11) : 8889 - 8904
  • [37] Adaptive Individual Q-Learning-A Multiagent Reinforcement Learning Method for Coordination Optimization
    Zhang, Zhen
    Wang, Dongqing
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 12
  • [38] Hierarchical multiagent reinforcement learning schemes for air traffic management
    Spatharis, Christos
    Bastas, Alevizos
    Kravaris, Theocharis
    Blekas, Konstantinos
    Vouros, George A.
    Manuel Cordero, Jose
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (01) : 147 - 159
  • [39] Accelerating decentralized reinforcement learning of complex individual behaviors
    Leottau, David L.
    Lobos-Tsunekawa, Kenzo
    Jaramillo, Francisco
    Ruiz-del-Solar, Javier
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2019, 85 : 243 - 253
  • [40] Potential-Based Difference Rewards for Multiagent Reinforcement Learning
    Devlin, Sam
    Yliniemi, Logan
    Kudenko, Daniel
    Tumer, Kagan
    AAMAS'14: PROCEEDINGS OF THE 2014 INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS & MULTIAGENT SYSTEMS, 2014, : 165 - 172