Authors:
Hu, Siyi [1]; Zhu, Fengda [1]; Chang, Xiaojun [1]; Liang, Xiaodan [2,3]
Affiliations:
[1] Monash Univ, Clayton, Vic, Australia
[2] Sun Yat Sen Univ, Guangzhou, Peoples R China
[3] Dark Matter AI Inc, Guangzhou, Peoples R China
Source:
2021 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW) | 2021
Keywords:
Multi-agent System;
Transfer Learning;
Zero-shot Generalization;
DOI:
10.1109/ICMEW53276.2021.9455984
CLC number:
TP39 [Computer Applications];
Discipline codes:
081203; 0835;
Abstract:
We present a Transformer-like agent that learns policies for multi-agent cooperation tasks, a breakthrough over traditional RNN-based multi-agent models, which must be retrained for each task. Our model handles varying inputs and outputs with strong transferability and can tackle different tasks in parallel. Moreover, we are the first to successfully integrate a Transformer into a recurrent architecture, providing insight into stabilizing Transformers in recurrent RL tasks.
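The key property behind the abstract's claim of handling varying inputs and outputs is that self-attention weights are shared across tokens, so one network serves any team size. The paper's actual architecture is not reproduced here; the following is only a minimal numpy sketch of that population-invariance idea, with all class and parameter names (`EntityAttentionPolicy`, `d_model`, `n_actions`) chosen for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class EntityAttentionPolicy:
    """One self-attention layer over entity tokens. Because every weight
    matrix acts per-token, the same parameters accept 3 agents or 30."""
    def __init__(self, d_model, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(d_model)
        self.Wq = rng.normal(0.0, s, (d_model, d_model))
        self.Wk = rng.normal(0.0, s, (d_model, d_model))
        self.Wv = rng.normal(0.0, s, (d_model, d_model))
        self.Wo = rng.normal(0.0, s, (d_model, n_actions))

    def forward(self, tokens):
        # tokens: (n_entities, d_model), one row per agent/entity observation
        q, k, v = tokens @ self.Wq, tokens @ self.Wk, tokens @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(tokens.shape[1]))  # entity-to-entity weights
        ctx = attn @ v                                      # attended context per entity
        return softmax(ctx @ self.Wo)                       # per-entity action distribution
```

A real recurrent-RL version would additionally carry hidden state across timesteps and need the stabilization tricks the abstract alludes to; this sketch only shows why the input/output arity is free to vary.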