Coordination as inference in multi-agent reinforcement learning

Cited: 4
Authors
Li, Zhiyuan [1 ]
Wu, Lijun [1 ]
Su, Kaile [2 ]
Wu, Wei [3 ,4 ]
Jing, Yulin [1 ]
Wu, Tong [1 ]
Duan, Weiwei [1 ]
Yue, Xiaofeng [1 ]
Tong, Xiyi [5 ]
Han, Yizhou [6 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu, Peoples R China
[2] Griffith Univ, Sch Informat & Commun Technol, Brisbane, Australia
[3] Cent South Univ, Sch Comp Sci & Engn, Changsha, Peoples R China
[4] Xiangjiang Lab, Changsha, Peoples R China
[5] Sichuan Univ Pittsburgh Inst, Chengdu, Peoples R China
[6] Univ Glasgow, Glasgow Int Coll, Glasgow City, Scotland
Keywords
Multi-agent system; Deep reinforcement learning; Non-stationarity; Variational inference; Causal inference; Theory of mind
DOI
10.1016/j.neunet.2024.106101
CLC classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Centralized Training and Decentralized Execution (CTDE) paradigm, in which a centralized critic accesses global information during training while the learned policies are executed in a decentralized way using only local information, has made great progress in recent years. Despite this progress, CTDE may suffer from the Centralized-Decentralized Mismatch (CDM) problem: the suboptimality of one agent's policy can degrade the policy learning of other agents through the centralized joint critic. In contrast to centralized learning, the cooperative model that most closely resembles how humans cooperate in nature is fully decentralized, i.e., Independent Learning (IL). However, two issues must be addressed before agents can coordinate through IL: (1) how agents become aware of the presence of other agents, and (2) how agents coordinate with one another to improve the joint policy under IL. In this paper, we propose an inference-based coordinated MARL method: the Deep Motor System (DMS). DMS first introduces individual intention inference, which allows agents to disentangle other agents from their environment. Second, it introduces causal inference to enhance coordination by reasoning about each agent's effect on the behavior of others. The proposed model was evaluated extensively on a series of Multi-Agent MuJoCo and StarCraft II tasks. The results show that DMS outperforms state-of-the-art independent learning baselines, including IPPO and HAPPO, and that coordinated behavior among agents can be learned even without the CTDE paradigm.
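The intention-inference idea in the abstract, where an agent builds a model of a teammate's behavior from its own local observations, can be sketched in miniature. The toy data, the softmax classifier, and the training loop below are illustrative assumptions for exposition only, not the paper's actual DMS architecture or variational model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: agent i sees a 4-dim local observation and tries to
# infer which of 3 discrete actions its teammate will take. The teammate's
# action is generated by a (hidden) linear rule plus a little noise.
W_true = rng.normal(size=(4, 3))
obs = rng.normal(size=(500, 4))
teammate_actions = np.argmax(obs @ W_true + 0.1 * rng.normal(size=(500, 3)), axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fit q(a_teammate | obs_i) by gradient descent on the cross-entropy loss:
# a minimal stand-in for the intention model that lets an agent separate
# "what my teammate is doing" from the rest of the environment.
W = np.zeros((4, 3))
onehot = np.eye(3)[teammate_actions]
for _ in range(300):
    probs = softmax(obs @ W)
    grad = obs.T @ (probs - onehot) / len(obs)  # cross-entropy gradient
    W -= 0.5 * grad

accuracy = (np.argmax(softmax(obs @ W), axis=1) == teammate_actions).mean()
print(f"intention-model accuracy: {accuracy:.2f}")
```

In the full method such a predictive model would be trained alongside each agent's policy and used to inform its own action choice; here it only demonstrates that a teammate's behavior is learnable from local observations.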
Pages: 13