A Decentralized Communication Framework Based on Dual-Level Recurrence for Multiagent Reinforcement Learning

Cited by: 3
Authors
Li, Xuesi [1 ]
Li, Jingchen [1 ]
Shi, Haobin [1 ]
Hwang, Kao-Shing [2 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci & Engn, Xian 710129, Shaanxi, Peoples R China
[2] Natl Sun Yat Sen Univ, Dept Elect Engn, Kaohsiung 804, Taiwan
Funding
National Natural Science Foundation of China;
Keywords
Reinforcement learning; Logic gates; Training; Adaptation models; Multi-agent systems; Task analysis; Decision making; Gated recurrent network; multiagent reinforcement learning; multiagent system;
DOI
10.1109/TCDS.2023.3281878
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Designing communication channels for multiagent systems is a feasible way to conduct decentralized learning, especially in partially observable environments or large-scale multiagent systems. In this work, a communication model with dual-level recurrence is developed to provide a more efficient communication mechanism for multiagent reinforcement learning. Communication is carried out by a gated-attention-based recurrent network, in which historical states are taken into account and treated as the second level of recurrence. We separate communication messages from memories in the recurrent model, so that the proposed communication flow can adapt to changing communication partners under limited communication, and the communication results are fair to every agent. We discuss our method in detail for both partially observable and fully observable environments. The results of several experiments suggest that our method outperforms existing decentralized communication frameworks and the corresponding centralized training method.
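The abstract's core idea (a private per-agent memory as the first level of recurrence, plus a separate message state updated by gated attention over the current neighbours as the second level) can be sketched roughly as follows. This is a hypothetical illustration only: the class name `DualLevelComm`, the gating form, and all weight shapes are assumptions for the sketch, not the paper's exact architecture.

```python
import numpy as np


def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


class DualLevelComm:
    """Toy sketch of dual-level recurrence: each agent carries a private
    memory (first-level recurrence over time steps) and a separate message
    vector updated by gated attention over reachable neighbours
    (second-level recurrence over communication rounds)."""

    def __init__(self, n_agents, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_obs = rng.standard_normal((dim, dim)) * 0.1   # gate weights
        self.W_msg = rng.standard_normal((dim, dim)) * 0.1   # message update
        self.memory = np.zeros((n_agents, dim))              # first level
        self.message = rng.standard_normal((n_agents, dim)) * 0.01  # second level

    def round(self, obs, adjacency):
        # Attention over neighbours' messages; masking by the adjacency
        # matrix lets the communication graph change between steps.
        scores = self.message @ self.message.T
        scores = np.where(adjacency > 0, scores, -1e9)
        attn = softmax(scores, axis=1)
        gathered = attn @ self.message
        # Gated update keeps communication messages separate from the
        # agents' private memories, as the abstract describes.
        gate = 1.0 / (1.0 + np.exp(-(obs @ self.W_obs)))
        self.message = gate * np.tanh(gathered @ self.W_msg) + (1 - gate) * self.message
        self.memory = np.tanh(self.memory + obs)  # simple first-level recurrence
        return self.message
```

Because attention is masked per round, an agent only aggregates messages from partners that are currently reachable, which is one plausible way to realise "changeable communication objects" under limited communication.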
Pages: 640-649
Page count: 10
Related papers
50 records
  • [1] Attentive Relational State Representation in Decentralized Multiagent Reinforcement Learning
    Liu, Xiangyu
    Tan, Ying
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (01) : 252 - 264
  • [2] Adaptive Learning: A New Decentralized Reinforcement Learning Approach for Cooperative Multiagent Systems
    Li, Meng-Lin
    Chen, Shaofei
    Chen, Jing
    IEEE ACCESS, 2020, 8 : 99404 - 99421
  • [3] Decentralized Reinforcement Learning Inspired by Multiagent Systems
    Adjodah, Dhaval
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS (AAMAS' 18), 2018, : 1729 - 1730
  • [4] CuMARL: Curiosity-Based Learning in Multiagent Reinforcement Learning
    Ningombam, Devarani Devi
    Yoo, Byunghyun
    Kim, Hyun Woo
    Song, Hwa Jeon
    Yi, Sungwon
    IEEE ACCESS, 2022, 10 : 87254 - 87265
  • [5] Formation Tracking of Spatiotemporal Multiagent Systems: A Decentralized Reinforcement Learning Approach
    Liu, Tianrun
    Chen, Yang-Yang
    IEEE SYSTEMS MAN AND CYBERNETICS MAGAZINE, 2024, 10 (04): : 52 - 60
  • [6] GCMA: An Adaptive Multiagent Reinforcement Learning Framework With Group Communication for Complex and Similar Tasks Coordination
    Peng, Kexing
    Ma, Tinghuai
    Yu, Xin
    Rong, Huan
    Qian, Yurong
    Al-Nabhan, Najla
    IEEE TRANSACTIONS ON GAMES, 2024, 16 (03) : 670 - 682
  • [7] Model-based Reinforcement Learning for Decentralized Multiagent Rendezvous
    Wang, Rose E.
    Kew, J. Chase
    Lee, Dennis
    Lee, Tsang-Wei Edward
    Zhang, Tingnan
    Ichter, Brian
    Tan, Jie
    Faust, Aleksandra
    CONFERENCE ON ROBOT LEARNING, VOL 155, 2020, 155 : 711 - 725
  • [8] An adaptive dual-level reinforcement learning approach for optimal trade execution
    Kim, Soohan
    Kim, Jimyeong
    Sul, Hong Kee
    Hong, Youngjoon
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 252
  • [9] Neighborhood-Curiosity-Based Exploration in Multiagent Reinforcement Learning
    Yang, Shike
    He, Ziming
    Li, Jingchen
    Shi, Haobin
    Ji, Qingbing
    Hwang, Kao-Shing
    Li, Xianshan
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2025, 17 (02) : 379 - 389
  • [10] CTDS: Centralized Teacher With Decentralized Student for Multiagent Reinforcement Learning
    Zhao, Jian
    Hu, Xunhan
    Yang, Mingyu
    Zhou, Wengang
    Zhu, Jiangcheng
    Li, Houqiang
    IEEE TRANSACTIONS ON GAMES, 2024, 16 (01) : 140 - 150