Replay in Deep Learning: Current Approaches and Missing Biological Elements

Cited by: 62
Authors
Hayes, Tyler L. [1 ]
Krishnan, Giri P. [2 ]
Bazhenov, Maxim [2 ]
Siegelmann, Hava T. [3 ]
Sejnowski, Terrence J. [2 ,4 ]
Kanan, Christopher [1 ,5 ,6 ]
Affiliations
[1] Rochester Inst Technol, Rochester, NY 14623 USA
[2] Univ Calif San Diego, La Jolla, CA 92093 USA
[3] Univ Massachusetts, Amherst, MA 01023 USA
[4] Salk Inst Biol Studies, La Jolla, CA 92037 USA
[5] Paige, New York, NY 10036 USA
[6] Cornell Tech, New York, NY 10044 USA
Funding
U.S. National Science Foundation;
Keywords
HIPPOCAMPAL PLACE CELLS; SLOW-WAVE SLEEP; REM-SLEEP; MEMORY CONSOLIDATION; EMOTIONAL MEMORY; EPISODIC MEMORY; CONNECTIONIST MODELS; ADULT NEUROGENESIS; RECOGNITION MEMORY; PREFRONTAL CORTEX;
DOI
10.1162/neco_a_01433
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated in deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this letter, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be used to improve artificial neural networks.
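The replay-like mechanisms mentioned in the abstract are, in their simplest deep-learning form, rehearsal methods: a small buffer of past examples is stored and mixed into each new training batch so that earlier knowledge is not overwritten. The sketch below is a minimal illustration of that idea only; the `ReplayBuffer` class, the reservoir-sampling policy, and the use of PyTorch are assumptions for this example, not the specific algorithms compared in the letter.

```python
# Minimal rehearsal-style replay sketch (illustrative only).
# A fixed-capacity buffer of past (x, y) pairs is mixed into each
# new training batch to mitigate catastrophic forgetting.
import random

import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Fixed-capacity memory filled with reservoir sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []        # stored (x, y) pairs
        self.num_seen = 0     # total examples observed so far

    def add(self, x, y):
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling: every example seen so far has an
            # equal chance of remaining in the buffer.
            j = random.randrange(self.num_seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs = torch.stack([x for x, _ in batch])
        ys = torch.stack([y for _, y in batch])
        return xs, ys


def train_step(model, optimizer, buffer, x_new, y_new, replay_batch_size=32):
    """One gradient update on new data mixed with replayed examples.

    x_new: float tensor of shape (batch, ...); y_new: integer class labels.
    """
    xs, ys = x_new, y_new
    if len(buffer.data) > 0:
        x_old, y_old = buffer.sample(replay_batch_size)
        xs = torch.cat([x_new, x_old])
        ys = torch.cat([y_new, y_old])

    optimizer.zero_grad()
    loss = F.cross_entropy(model(xs), ys)
    loss.backward()
    optimizer.step()

    # Store the new examples for future rehearsal.
    for x, y in zip(x_new, y_new):
        buffer.add(x, y)
    return loss.item()
```

In a class-incremental setting, `train_step` would be called on each incoming batch of the current task; reservoir sampling keeps the buffer an approximately uniform sample over all examples seen so far.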
Pages: 2908-2950
Page count: 43