When Does Communication Learning Need Hierarchical Multi-Agent Deep Reinforcement Learning

Cited by: 2
Authors
Ossenkopf, Marie [1 ]
Jorgensen, Mackenzie [2 ]
Geihs, Kurt [1 ]
Affiliations
[1] Univ Kassel, Distributed Syst Grp, Wilhelmshoeher Allee 73, D-34121 Kassel, Germany
[2] Villanova Univ, Comp Sci, Villanova, PA 19085 USA
Keywords
Agent communication; deep reinforcement learning; hierarchical learning; multi-agent systems
DOI
10.1080/01969722.2019.1677335
Chinese Library Classification
TP3 [computing technology; computer technology]
Discipline Classification Code
0812
Abstract
Multi-agent systems need to communicate to coordinate a shared task. We show that a recurrent neural network (RNN) can learn a communication protocol for coordination, even if the actions to coordinate are performed several steps after the communication phase. We show that separating tasks with different temporal scales is necessary for successful learning. We contribute a hierarchical deep reinforcement learning model for multi-agent systems that separates the communication and coordination task from action selection through a hierarchical policy. We further show that a separation of concerns in communication is beneficial but not necessary. As a testbed, we propose the Dungeon Lever Game and extend the Differentiable Inter-Agent Learning (DIAL) framework. We present and compare results from different model variations on the Dungeon Lever Game.
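The temporal separation the abstract describes can be sketched in a few lines: a high-level policy decides on a message once per communication phase and freezes it, while a low-level policy picks primitive actions at every step conditioned on that frozen message. The class below, its `phase_length` parameter, and the lever-walking toy are illustrative assumptions for this sketch, not the paper's DIAL-based model.

```python
class HierarchicalPolicy:
    """Two-timescale policy: slow message choice, fast action choice."""

    def __init__(self, phase_length, high_policy, low_policy):
        self.phase_length = phase_length  # steps per communication phase
        self.high_policy = high_policy    # obs -> message (slow timescale)
        self.low_policy = low_policy      # (message, obs) -> action (fast)
        self._message = None

    def step(self, t, obs):
        # Re-decide the message only at phase boundaries; between them,
        # the low-level policy acts under the frozen message.
        if t % self.phase_length == 0:
            self._message = self.high_policy(obs)
        return self._message, self.low_policy(self._message, obs)


# Toy instantiation (hypothetical): the "message" names a target lever
# position, and the low-level policy walks toward it one cell per step.
policy = HierarchicalPolicy(
    phase_length=3,
    high_policy=lambda obs: obs["target_lever"],
    low_policy=lambda msg, obs: (
        1 if obs["pos"] < msg else (-1 if obs["pos"] > msg else 0)
    ),
)

pos, trace = 0, []
for t in range(6):
    obs = {"target_lever": 4 if t < 3 else 1, "pos": pos}
    msg, action = policy.step(t, obs)
    pos += action
    trace.append((t, msg, action, pos))
```

Even though the environment's target changes at every step here, the agent's message only changes at phase boundaries (steps 0 and 3), which is exactly the separation of timescales the model relies on.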
Pages: 672-692 (21 pages)