Cooperative Multi-Agent Reinforcement Learning With Approximate Model Learning

Cited by: 11
Authors
Park, Young Joon [1 ]
Lee, Young Jae [1 ]
Kim, Seoung Bum [1 ]
Affiliations
[1] Korea Univ, Sch Ind Management Engn, Seoul 02841, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Reinforcement learning; model-free method; multi-agent system; multi-agent cooperation; actor-critic method; deterministic policy gradient; dynamics; performance; framework;
DOI
10.1109/ACCESS.2020.3007219
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In multi-agent reinforcement learning, agents must learn a communication protocol to optimize collaboration policies and to mitigate unstable learning. Existing actor-critic methods address communication among agents, but they struggle to improve sample efficiency and to learn robust policies because the dynamics and nonstationarity of the environment, which arise as the policies of other agents change, are difficult to capture. We propose a method for learning cooperative policies in multi-agent environments that accounts for communication among agents. The proposed method combines recurrent neural network-based actor-critic networks with deterministic policy gradients to centrally train decentralized policies. The actor networks let the agents communicate over forward and backward paths and determine their subsequent actions. The critic network helps train the actor networks by sending each actor a gradient signal proportional to its contribution to the global reward. To address partial observability and unstable learning, we propose auxiliary prediction networks that approximate the state transitions and the reward function. We used multi-agent environments to demonstrate the usefulness and superiority of the proposed method, comparing it with existing multi-agent reinforcement learning methods in terms of both learning efficiency and goal achievement in the test phase. The results show that the proposed method outperformed the alternatives.
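To make the auxiliary-prediction idea concrete, the following is a minimal sketch, assuming PyTorch. The class name AuxiliaryModel, the GRU encoder, the loss weighting, and all dimensions are illustrative assumptions for exposition, not the paper's actual implementation.

import torch
import torch.nn as nn

class AuxiliaryModel(nn.Module):
    """Predicts the next observation and the immediate reward from the
    current observation-action sequence, giving the agent an approximate
    environment model that can regularize actor-critic training."""

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Recurrent encoder to cope with partial observability.
        self.encoder = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.next_obs_head = nn.Linear(hidden_dim, obs_dim)  # state-transition head
        self.reward_head = nn.Linear(hidden_dim, 1)          # reward head

    def forward(self, obs_seq, act_seq):
        # obs_seq: (batch, time, obs_dim); act_seq: (batch, time, act_dim)
        h, _ = self.encoder(torch.cat([obs_seq, act_seq], dim=-1))
        return self.next_obs_head(h), self.reward_head(h)

def auxiliary_loss(model, obs_seq, act_seq, next_obs_seq, rew_seq):
    """MSE losses for the predicted transition and reward, to be added to
    the actor-critic objective during centralized training."""
    pred_obs, pred_rew = model(obs_seq, act_seq)
    return (nn.functional.mse_loss(pred_obs, next_obs_seq)
            + nn.functional.mse_loss(pred_rew.squeeze(-1), rew_seq))

# Example usage with random tensors (batch=4, time=5, obs_dim=8, act_dim=2):
model = AuxiliaryModel(obs_dim=8, act_dim=2)
loss = auxiliary_loss(model,
                      torch.randn(4, 5, 8), torch.randn(4, 5, 2),
                      torch.randn(4, 5, 8), torch.randn(4, 5))
loss.backward()

The design intuition is that forcing the shared representation to predict transitions and rewards gives each agent an approximate model of the changing environment, which is how the abstract motivates improved sample efficiency under nonstationarity.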
Pages: 125389-125400
Page count: 12