Multi-agent reinforcement learning based on local communication

Cited by: 2
Authors
Wenxu Zhang
Lei Ma
Xiaonan Li
Affiliation
[1] Southwest Jiaotong University, School of Electrical Engineering
Source
Cluster Computing | 2019 / Vol. 22
Keywords
Reinforcement learning; Multi-agent; Local communication; Consensus;
DOI
Not available
Abstract
Addressing the locality and uncertainty of observations in large-scale multi-agent application scenarios, the Decentralized Partially Observable Markov Decision Process (DEC-POMDP) model is considered, and a novel multi-agent reinforcement learning algorithm based on local communication is proposed. In a distributed learning environment, the elements of reinforcement learning are difficult to describe effectively under local observation, and each agent's learning behaviour is influenced by its teammates. Local communication with a consensus protocol is used to reach agreement on the globally observed environment: a portion of the strategies generated by repeated observations is eliminated, and the agent team gradually converges to a uniform opinion on the state of the observed event or object. The agents can therefore approach a unique belief space regardless of whether each individual agent performs a complete or only a partial observation. Simulation results show that the learning strategy space is reduced and the learning speed is improved.
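The consensus mechanism mentioned in the abstract can be illustrated with a minimal sketch. The code below is a hypothetical example, not the paper's actual protocol: it implements a standard synchronous average-consensus update, where each agent repeatedly replaces its local observation estimate with a step toward the average of its neighbours' estimates, so the team converges to a single shared belief. The function names, step size `eps`, and ring topology are all assumptions made for illustration.

```python
# Hypothetical average-consensus sketch (not the paper's exact protocol):
# each agent i updates x_i <- x_i + eps * sum_{j in N(i)} (x_j - x_i),
# which drives all agents toward the average of their initial local
# observations over a connected communication graph.

def consensus_step(values, neighbours, eps=0.3):
    """One synchronous consensus update over the communication graph."""
    new_values = []
    for i, x_i in enumerate(values):
        new_values.append(x_i + eps * sum(values[j] - x_i for j in neighbours[i]))
    return new_values

def run_consensus(values, neighbours, steps=50):
    """Iterate the consensus step; returns the converged estimates."""
    for _ in range(steps):
        values = consensus_step(values, neighbours)
    return values

# Four agents on a ring, each communicating only with its two neighbours;
# their partial observations 0..3 converge to the common average 1.5.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
beliefs = run_consensus([0.0, 1.0, 2.0, 3.0], ring)
```

With the step size `eps=0.3` this ring's update matrix is a stochastic contraction on the disagreement subspace, so convergence is geometric; in the paper's setting such a shared estimate would then feed into each agent's learning update in place of its raw local observation.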
Pages: 15357-15366 (9 pages)