Multi-Agent Reinforcement Learning is a Sequence Modeling Problem

Times Cited: 0
Authors
Wen, Muning [1 ,2 ]
Kuba, Jakub Grudzien [3 ]
Lin, Runji [4 ]
Zhang, Weinan [1 ]
Wen, Ying [1 ]
Wang, Jun [2 ,5 ]
Yang, Yaodong [6 ,7 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Digital Brain Lab, Berkeley, CA USA
[3] Univ Oxford, Oxford, England
[4] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[5] UCL, London, England
[6] Beijing Inst Gen AI, Beijing, Peoples R China
[7] Peking Univ, Inst AI, Beijing, Peoples R China
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022) | 2022
Funding
National Natural Science Foundation of China
DOI
N/A
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large sequence models (SM) such as the GPT series and BERT have displayed outstanding performance and generalization capabilities in natural language processing, vision, and, recently, reinforcement learning. A natural follow-up question is how to abstract multi-agent decision making as a sequence modeling problem as well and benefit from the prosperous development of SMs. In this paper, we introduce a novel architecture named Multi-Agent Transformer (MAT) that effectively casts cooperative multi-agent reinforcement learning (MARL) into SM problems wherein the objective is to map agents' observation sequences to agents' optimal action sequences. Our goal is to build the bridge between MARL and SMs so that the modeling power of modern sequence models can be unleashed for MARL. Central to our MAT is an encoder-decoder architecture that leverages the multi-agent advantage decomposition theorem to transform the joint policy search problem into a sequential decision-making process; this renders only linear time complexity for multi-agent problems and, most importantly, endows MAT with a monotonic performance improvement guarantee. Unlike prior arts such as Decision Transformer, which fit only pre-collected offline data, MAT is trained by online trial and error from the environment in an on-policy fashion. To validate MAT, we conduct extensive experiments on the StarCraft II, Multi-Agent MuJoCo, Dexterous Hands Manipulation, and Google Research Football benchmarks. Results demonstrate that MAT achieves superior performance and data efficiency compared to strong baselines including MAPPO and HAPPO. Furthermore, we demonstrate that MAT is an excellent few-shot learner on unseen tasks regardless of changes in the number of agents. See our project page at https://sites.google.com/view/multi-agent-transformer.
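The identity behind MAT's sequential reformulation is the multi-agent advantage decomposition theorem, which the abstract credits as the mechanism turning joint policy search into step-by-step decision making. A sketch of the statement, with notation reconstructed from the authors' earlier work on heterogeneous-agent trust region methods rather than from this record (s denotes the state, a^{i_j} the action of the j-th agent under an arbitrary ordering i_1, ..., i_n of the n agents):

\[
A_{\boldsymbol{\pi}}^{i_{1:n}}\left(s,\, \mathbf{a}^{i_{1:n}}\right) \;=\; \sum_{j=1}^{n} A_{\boldsymbol{\pi}}^{i_j}\left(s,\, \mathbf{a}^{i_{1:j-1}},\, a^{i_j}\right)
\]

Because the joint advantage splits into per-agent terms, each conditioned only on the actions of the agents preceding it, the joint action can be improved by letting agents choose one at a time, each maximizing its own conditional advantage. This is what lets a decoder generate the action sequence autoregressively in n steps, linear rather than exponential in the number of agents, while preserving the monotonic improvement guarantee the abstract refers to.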
Pages: 13