REMIND Your Neural Network to Prevent Catastrophic Forgetting

Cited by: 161
Authors
Hayes, Tyler L. [1]
Kafle, Kushal [2]
Shrestha, Robik [1]
Acharya, Manoj [1]
Kanan, Christopher [1,3,4]
Affiliations
[1] Rochester Inst Technol, Rochester, NY 14623 USA
[2] Adobe Res, San Jose, CA 95110 USA
[3] Paige, New York, NY 10036 USA
[4] Cornell Tech, New York, NY 10044 USA
Source
COMPUTER VISION - ECCV 2020, PT VIII | 2020 / Vol. 12353
Keywords
Online learning; Brain-inspired; Deep learning; Memory replay; Sleep; Hippocampus; Datasets
DOI
10.1007/978-3-030-58598-3_28
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
People learn throughout life. However, incrementally updating conventional neural networks leads to catastrophic forgetting. A common remedy is replay, which is inspired by how the brain consolidates memory. Replay involves fine-tuning a network on a mixture of new and old instances. While there is neuroscientific evidence that the brain replays compressed memories, existing methods for convolutional networks replay raw images. Here, we propose REMIND, a brain-inspired approach that enables efficient replay with compressed representations. REMIND is trained in an online manner, meaning it learns one example at a time, which is closer to how humans learn. Under the same constraints, REMIND outperforms other methods for incremental class learning on the ImageNet ILSVRC-2012 dataset. We probe REMIND's robustness to data ordering schemes known to induce catastrophic forgetting. We demonstrate REMIND's generality by pioneering online learning for Visual Question Answering (VQA). Code: https://github.com/tyler-hayes/REMIND
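The abstract's central mechanism is replay from a buffer of compressed features rather than raw images, trained one example at a time. The following is a minimal, hypothetical sketch of that idea, not the paper's implementation: REMIND product-quantizes mid-level CNN feature maps (see the linked repository), whereas this toy uses 8-bit rounding as a stand-in compressor, and all names (frozen, head, quantize, online_step, REPLAY_SIZE) are invented for illustration.

```python
# Minimal sketch: online replay with compressed feature representations,
# loosely in the spirit of REMIND. NOT the paper's code: real REMIND
# product-quantizes mid-level CNN feature maps; here symmetric 8-bit
# rounding is a stand-in compressor, and all names are illustrative.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
FEAT_DIM, NUM_CLASSES, REPLAY_SIZE = 64, 10, 4

# Early layers stay frozen after an initial training phase.
frozen = nn.Sequential(nn.Linear(32, FEAT_DIM), nn.ReLU()).eval()
for p in frozen.parameters():
    p.requires_grad_(False)

# Later ("plastic") layers are updated online, one example at a time.
head = nn.Linear(FEAT_DIM, NUM_CLASSES)
opt = torch.optim.SGD(head.parameters(), lr=0.01)

def quantize(z):
    # Stand-in for product quantization: scale to int8 and round.
    scale = z.abs().max().clamp(min=1e-8)
    return (z / scale * 127).round().to(torch.int8), scale

def dequantize(q, scale):
    return q.to(torch.float32) / 127 * scale

buffer = []  # stores (compressed feature, scale, label), never raw inputs

def online_step(x, y):
    with torch.no_grad():
        z = frozen(x)  # mid-level feature of the incoming example
    feats, labels = [z], [y]
    # Mix the new example with reconstructed old (compressed) examples.
    for q, s, y_old in random.sample(buffer, min(REPLAY_SIZE, len(buffer))):
        feats.append(dequantize(q, s))
        labels.append(y_old)
    loss = F.cross_entropy(head(torch.stack(feats)), torch.tensor(labels))
    opt.zero_grad()
    loss.backward()
    opt.step()
    buffer.append((*quantize(z), y))  # store the new example compressed
    return loss.item()

# Toy single-pass stream: each labeled example is seen exactly once.
for t in range(100):
    y = t % NUM_CLASSES
    online_step(torch.randn(32) + y, y)
```

The design point the sketch illustrates: only the small plastic head is updated online, while old knowledge is rehearsed from cheap compressed features instead of stored images, which is what makes large replay buffers feasible under a fixed memory budget.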
Pages: 466-483
Number of pages: 18