State-based episodic memory for multi-agent reinforcement learning

Times cited: 5
Authors
Ma, Xiao [1 ]
Li, Wu-Jun [1 ]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Dept Comp Sci & Technol, 168 Xianlin Ave, Nanjing 210023, Jiangsu, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Multi-agent; Reinforcement learning; Episodic memory; Sample efficiency;
DOI
10.1007/s10994-023-06365-2
CLC classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Multi-agent reinforcement learning (MARL) algorithms have made promising progress in recent years by leveraging the centralized training and decentralized execution (CTDE) paradigm. However, existing MARL algorithms still suffer from sample inefficiency. In this paper, we propose a simple yet effective approach, called state-based episodic memory (SEM), to improve sample efficiency in MARL. SEM adopts episodic memory (EM) to supervise the centralized training procedure of CTDE in MARL. To the best of our knowledge, SEM is the first work to introduce EM into MARL. When used for MARL, SEM has lower space and time complexity than state-and-action-based EM (SAEM), which was originally proposed for single-agent reinforcement learning. Experimental results on two synthetic environments and one real environment show that introducing episodic memory into MARL can improve sample efficiency, and that SEM can reduce storage and time costs compared with SAEM.
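The abstract does not detail SEM's mechanism, but the general idea of a state-keyed episodic memory can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the paper's implementation: the class name `StateEpisodicMemory`, the exact-match dictionary lookup, and the "store the best return seen per state" rule are all assumptions chosen for brevity.

```python
class StateEpisodicMemory:
    """Illustrative sketch (not the paper's actual SEM): a table keyed by
    a state identifier, storing the highest discounted return ever
    observed from that state. Such stored returns can then serve as
    extra supervision targets during centralized training."""

    def __init__(self):
        # state key -> best discounted return observed so far
        self.table = {}

    def update(self, trajectory, gamma=0.99):
        """trajectory: list of (state_key, reward) pairs, in time order.
        Propagate discounted returns backward through the episode and
        keep the maximum return per state."""
        ret = 0.0
        for state_key, reward in reversed(trajectory):
            ret = reward + gamma * ret
            if ret > self.table.get(state_key, float("-inf")):
                self.table[state_key] = ret

    def lookup(self, state_key):
        """Return the best stored return for this state, or None if unseen."""
        return self.table.get(state_key)


mem = StateEpisodicMemory()
mem.update([("s0", 0.0), ("s1", 1.0)], gamma=0.9)
print(mem.lookup("s0"))  # -> 0.9 (i.e., 0.0 + 0.9 * 1.0)
```

Keying the memory on the global state alone (rather than state-action pairs, as in SAEM) is what gives the storage and lookup savings the abstract claims, since the table does not grow with the joint action space.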
Pages: 5163-5190
Page count: 28
References (45 total)
[1] Amarjyoti, S. (2017). arXiv:1701.08878.
[2] Andersen, P., et al. (2009). The Hippocampus Book. Oxford. DOI 10.1093/acprof:oso/9780195100273.001.0001.
[3] [Anonymous] (2000). Proceedings of the 17th International Conference on Machine Learning.
[4] Badia, A. P. (2020). ICML.
[5] Bentley, J. L. (1975). Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9), 509-517.
[6] Berner, C. (2019). arXiv.
[7] Blundell, C. (2016). arXiv:1606.04460.
[8] Cao, Y., Yu, W., Ren, W., & Chen, G. (2013). An overview of recent progress in the study of distributed multi-agent coordination. IEEE Transactions on Industrial Informatics, 9(1), 427-438.
[9] Duan, Y. (2016). Proceedings of Machine Learning Research, Vol. 48.
[10] Foerster, J. N. (2018). AAAI Conference on Artificial Intelligence, p. 2974.