Sequential memory improves sample and memory efficiency in episodic control

Times cited: 1
Authors
Freire, Ismael T. [1,3]
Amil, Adrian F. [1 ]
Verschure, Paul F. M. J. [2 ]
Affiliations
[1] Radboud Univ Nijmegen, Donders Inst Brain Cognit & Behav, Ctr Neurosci DCN FNWI, Nijmegen, Netherlands
[2] Univ Miguel Hernandez Elche, Alicante Inst Neurosci, Dept Hlth Psychol, Elche, Spain
[3] Sorbonne Univ, Inst Intelligent Syst & Robot, Paris, France
Funding
European Union Horizon 2020;
Keywords
HIPPOCAMPUS; CELLS; OSCILLATIONS; PREFERENCE; MECHANISM; DECISIONS; LEVEL;
DOI
10.1038/s42256-024-00950-3
CLC Classification
TP18 [Artificial intelligence theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep reinforcement learning algorithms are known for their sample inefficiency, requiring extensive episodes to reach optimal performance. Episodic reinforcement learning algorithms aim to overcome this issue by using extended memory systems to leverage past experiences. However, these memory augmentations are often used as mere buffers, from which isolated events are resampled for offline learning (for example, replay). In this Article, we introduce Sequential Episodic Control (SEC), a hippocampal-inspired model that stores entire event sequences in their temporal order and employs a sequential bias in their retrieval to guide actions. We evaluate SEC across various benchmarks from the Animal-AI testbed, demonstrating its superior performance and sample efficiency compared to several state-of-the-art models, including Model-Free Episodic Control, Deep Q-Network and Episodic Reinforcement Learning with Associative Memory. Our experiments show that SEC achieves higher rewards and faster policy convergence in tasks requiring memory and decision-making. Additionally, we investigate the effects of memory constraints and forgetting mechanisms, revealing that prioritized forgetting enhances both performance and policy stability. Further, ablation studies demonstrate the critical role of the sequential memory component in SEC. Finally, we discuss how fast, sequential hippocampal-like episodic memory systems could support both habit formation and deliberation in artificial and biological systems.
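The mechanism named in the abstract is concrete enough to sketch in code. The Python sketch below illustrates the two ideas SEC is built on, storage of whole event sequences in temporal order and a sequential bias during retrieval; the class name, the exponential similarity kernel, the k-nearest retrieval, the FIFO forgetting rule and the multiplicative bias term are all assumptions for illustration, not the authors' implementation (the paper itself studies prioritized rather than FIFO forgetting).

```python
"""Minimal sketch of a sequential episodic controller in the spirit of SEC,
as described in the abstract. Illustrative assumptions throughout."""
import numpy as np


class SequentialEpisodicMemory:
    def __init__(self, max_sequences=100, k=5, sequential_bias=2.0):
        self.max_sequences = max_sequences      # capacity before forgetting
        self.k = k                              # neighbours used per query
        self.sequential_bias = sequential_bias  # boost for sequence successors
        self.sequences = []                     # each: [(state, action, reward), ...]
        self._last_hit = None                   # (sequence idx, position) of last retrieval

    def store_sequence(self, episode):
        """Store a complete episode in its temporal order."""
        self.sequences.append(list(episode))
        if len(self.sequences) > self.max_sequences:
            self.sequences.pop(0)  # naive FIFO forgetting (illustrative only)

    def act(self, state, n_actions):
        """Score actions by similarity-weighted rewards of the k most similar
        stored events, boosting any event that directly follows the event
        retrieved on the previous step (the sequential bias)."""
        scored = []
        for si, seq in enumerate(self.sequences):
            for pi, (s, a, r) in enumerate(seq):
                sim = float(np.exp(-np.linalg.norm(state - s)))
                if self._last_hit == (si, pi - 1):  # successor of last hit
                    sim *= self.sequential_bias
                scored.append((sim, si, pi, a, r))
        if not scored:  # empty memory: act randomly
            return np.random.randint(n_actions)
        scored.sort(key=lambda x: x[0], reverse=True)
        top = scored[: self.k]
        q = np.zeros(n_actions)
        for sim, _, _, a, r in top:
            q[a] += sim * r
        self._last_hit = (top[0][1], top[0][2])  # remember the best match
        return int(np.argmax(q))


if __name__ == "__main__":
    # Toy usage: 8-d state embeddings, 3 discrete actions.
    mem = SequentialEpisodicMemory()
    episode = [(np.random.rand(8), np.random.randint(3), np.random.rand())
               for _ in range(10)]
    mem.store_sequence(episode)
    print(mem.act(np.random.rand(8), n_actions=3))
```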
Pages: 43-55
Number of pages: 17