Learning in Bi-level Markov Games

Times Cited: 0
Authors
Meng, Linghui [1 ,2 ]
Ruan, Jingqing [1 ,2 ]
Xing, Dengpeng [1 ]
Xu, Bo [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
Source
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2022
Keywords
Reinforcement Learning; Multi-Agent System; Leader-Follower
DOI
10.1109/IJCNN55064.2022.9892747
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Although multi-agent reinforcement learning (MARL) has demonstrated remarkable progress in tackling sophisticated cooperative tasks, the assumption that agents take simultaneous actions still limits the applicability of MARL to many real-world problems. In this work, we relax this assumption by proposing the framework of the bi-level Markov game (BMG). BMG breaks the simultaneity by assigning the two players a leader-follower relationship in which the leader accounts for the policy of the follower, who takes the best response to the leader's actions. We propose two provably convergent algorithms to solve BMG: BMG-1 and BMG-2. The former uses standard Q-learning, while the latter avoids solving the local Stackelberg equilibrium required by BMG-1 by using a further two-step transition to estimate the state value. For both methods, we consider temporal difference learning techniques with both tabular and neural network representations. To verify the effectiveness of our BMG framework, we test on a series of games that existing MARL solvers find challenging: Seeker, Cooperative Navigation, and Football. Experimental results show that our BMG methods achieve better performance and lower variance.
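The abstract describes the leader-follower structure and the Q-learning-style updates only in prose. The sketch below illustrates the general idea of a tabular Stackelberg (leader-follower) temporal-difference step of the kind BMG-1 builds on; it is a minimal illustration under our own assumptions, not the paper's implementation. The table layout (`Q_L`, `Q_F` as per-state payoff matrices), the helper `stackelberg_actions`, and the learning parameters are hypothetical names introduced here for clarity.

```python
import numpy as np

def stackelberg_actions(Q_L, Q_F, s):
    """At state s, the follower best-responds to each leader action;
    the leader then commits to the action whose induced pair is best for it.
    Q_L[s] and Q_F[s] are (n_leader_actions x n_follower_actions) tables."""
    n_leader = Q_L[s].shape[0]
    best = None
    for a_l in range(n_leader):
        a_f = int(np.argmax(Q_F[s][a_l]))  # follower's best response to a_l
        if best is None or Q_L[s][a_l, a_f] > Q_L[s][best]:
            best = (a_l, a_f)
    return best

def td_update(Q_L, Q_F, s, a_l, a_f, r_l, r_f, s_next, alpha=0.1, gamma=0.99):
    """One temporal-difference update for both players, bootstrapping from
    the local Stackelberg action pair of the next state."""
    a_l_next, a_f_next = stackelberg_actions(Q_L, Q_F, s_next)
    Q_L[s][a_l, a_f] += alpha * (r_l + gamma * Q_L[s_next][a_l_next, a_f_next] - Q_L[s][a_l, a_f])
    Q_F[s][a_l, a_f] += alpha * (r_f + gamma * Q_F[s_next][a_l_next, a_f_next] - Q_F[s][a_l, a_f])

if __name__ == "__main__":
    # Toy usage: 4 states, 3 actions per player, zero-initialized tables.
    n_states, n_l, n_f = 4, 3, 3
    Q_L = {s: np.zeros((n_l, n_f)) for s in range(n_states)}
    Q_F = {s: np.zeros((n_l, n_f)) for s in range(n_states)}
    td_update(Q_L, Q_F, s=0, a_l=1, a_f=2, r_l=1.0, r_f=0.5, s_next=1)
```

The bootstrap target here is the value of the next state's local Stackelberg action pair; per the abstract, this equilibrium-solving step is what BMG-2 replaces with a further two-step transition when estimating the state value.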
Pages: 8