TRANSFORMER BASED MULTI-AGENT FRAMEWORK

Cited by: 0
Authors
Hu, Siyi [1 ]
Zhu, Fengda [1 ]
Chang, Xiaojun [1 ]
Liang, Xiaodan [2 ,3 ]
Affiliations
[1] Monash Univ, Clayton, Vic, Australia
[2] Sun Yat Sen Univ, Guangzhou, Peoples R China
[3] Dark Matter AI Inc, Guangzhou, Peoples R China
Source
2021 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW) | 2021
Keywords
Multi-agent System; Transfer Learning; Zero-shot Generalization;
DOI
10.1109/ICMEW53276.2021.9455984
CLC Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
We present a Transformer-like agent for learning policies in multi-agent cooperation tasks, overcoming a key limitation of traditional RNN-based multi-agent models, which must be retrained for each task. Our model handles varying input and output dimensions with strong transferability and can tackle different tasks in parallel. Moreover, we are the first to successfully integrate a transformer into a recurrent architecture, providing insight into stabilizing transformers in recurrent RL tasks.
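
To make the idea concrete, below is a minimal sketch (PyTorch, not the authors' released code) of how a transformer encoder can consume a variable number of entity observations while a recurrent cell carries hidden state across time steps. Every module name, dimension, and the GRU-based recurrence here are illustrative assumptions, not details taken from the paper.

# Illustrative sketch only: a transformer-style policy over a variable
# number of entity observations, with a recurrent (GRU) core. All names
# and dimensions are assumptions for illustration.
import torch
import torch.nn as nn

class TransformerAgentPolicy(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Recurrent core over time steps; one hidden state per agent.
        self.rnn = nn.GRUCell(d_model, d_model)
        self.head = nn.Linear(d_model, n_actions)

    def forward(self, entities, hidden, mask=None):
        # entities: (batch, n_entities, obs_dim); n_entities may vary per task
        # hidden:   (batch, d_model) recurrent state for this agent
        # mask:     (batch, n_entities), True where an entity slot is padding
        x = self.embed(entities)
        x = self.encoder(x, src_key_padding_mask=mask)
        # Pool over entities so the summary is invariant to entity count/order.
        if mask is not None:
            keep = (~mask).unsqueeze(-1).float()
            pooled = (x * keep).sum(1) / keep.sum(1).clamp(min=1.0)
        else:
            pooled = x.mean(1)
        hidden = self.rnn(pooled, hidden)
        return self.head(hidden), hidden

# Usage: 5 entities at one step, 8 at the next; the same weights handle both,
# which is what lets one policy transfer across tasks of different sizes.
policy = TransformerAgentPolicy(obs_dim=10, n_actions=6)
h = torch.zeros(1, 64)
logits, h = policy(torch.randn(1, 5, 10), h)
logits, h = policy(torch.randn(1, 8, 10), h)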
Pages: 2
Cited References
11 in total
  • [1] Du YL, 2019, ADV NEUR IN, V32
  • [2] Kang WJ, 2019, P 31 INT C MACHINE L
  • [3] Mahajan A, 2019, ADV NEUR IN, V32
  • [4] Peng P, 2017, Arxiv, DOI arXiv:1703.10069
  • [5] Rashid T, 2018, PR MACH LEARN RES, V80
  • [6] Shao K, Zhu YH, Zhao DB, 2019, StarCraft Micromanagement With Reinforcement Learning and Curriculum Transfer Learning, IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, V3(1), P73-84
  • [7] Sunehag P, 2017, Arxiv, DOI arXiv:1706.05296
  • [8] Wang WX, 2020, AAAI CONF ARTIF INTE, V34, P7293
  • [9] Yang YD, 2018, Arxiv, DOI arXiv:1709.04511
  • [10] Yang YD, 2020, Arxiv