Continual and One-Shot Learning Through Neural Networks with Dynamic External Memory

Cited by: 13
Authors
Lüders, Benno [1]
Schläger, Mikkel [1]
Korach, Aleksandra [1]
Risi, Sebastian [1]
Affiliation
[1] IT Univ Copenhagen, Copenhagen, Denmark
Source
APPLICATIONS OF EVOLUTIONARY COMPUTATION, EVOAPPLICATIONS 2017, PT I | 2017, Vol. 10199
Keywords
Neural Turing Machine; Continual learning; Adaptive neural networks; Plasticity; Memory; Neuroevolution
DOI
10.1007/978-3-319-55849-3_57
CLC number
TP301 [Theory, Methods]
Discipline code
081202
Abstract
Training neural networks to quickly learn new skills without forgetting previously learned ones is an important open challenge in machine learning. A common problem for adaptive networks that learn during their lifetime is that the weights encoding a particular task are often overridden when a new task is learned. This paper takes a step toward overcoming this limitation by building on the recently proposed Evolving Neural Turing Machine (ENTM) approach. In the ENTM, neural networks are augmented with an external memory component that they can write to and read from, which allows them to store associations quickly and retain them over long periods of time. The results in this paper demonstrate that the ENTM is able to perform one-shot learning in reinforcement learning tasks without catastrophic forgetting of previously stored associations. Additionally, we introduce a new ENTM default jump mechanism that makes it easier to find unused memory locations and therefore facilitates the evolution of continual learning networks. Our results suggest that augmenting evolving networks with an external memory component is not only a viable mechanism for adaptive behaviors in neuroevolution but also one that allows these networks to perform continual and one-shot learning at the same time.
Pages: 886-901
Page count: 16
Related papers
50 items in total
  • [31] Continual learning-based trajectory prediction with memory augmented networks
    Yang, Biao; Fan, Fucheng; Ni, Rongrong; Li, Jie; Loo, Chu Kiong; Liu, Xiaofeng
    KNOWLEDGE-BASED SYSTEMS, 2022, 258
  • [32] Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations
    Ororbia, Alexander; Mali, Ankur; Giles, C. Lee; Kifer, Daniel
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (10): 4267-4278
  • [33] Efficient Spiking Neural Networks with Sparse Selective Activation for Continual Learning
    Shen, Jiangrong; Ni, Wenyao; Xu, Qi; Tang, Huajin
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 1, 2024: 611-619
  • [34] SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with Continual and Unsupervised Learning Capabilities in Dynamic Environments
    Putra, Rachmad Vidya Wicaksana; Shafique, Muhammad
    2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021: 1057-1062
  • [35] Continual learning with attentive recurrent neural networks for temporal data classification
    Yin, Shao-Yu; Huang, Yu; Chang, Tien-Yu; Chang, Shih-Fang; Tseng, Vincent S.
    NEURAL NETWORKS, 2023, 158: 171-187
  • [36] Targeted Data Poisoning Attacks Against Continual Learning Neural Networks
    Li, Huayu; Ditzler, Gregory
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [37] Continual Learning with Deep Neural Networks in Physiological Signal Data: A Survey
    Li, Ao; Li, Huayu; Yuan, Geng
    HEALTHCARE, 2024, 12 (02)
  • [38] A single fast Hebbian-like process enabling one-shot class addition in deep neural networks without backbone modification
    Hosoda, Kazufumi; Nishida, Keigo; Seno, Shigeto; Mashita, Tomohiro; Kashioka, Hideki; Ohzawa, Izumi
    FRONTIERS IN NEUROSCIENCE, 2024, 18
  • [39] Visual one-shot learning as an 'anti-camouflage device': a novel morphing paradigm
    Ishikawa, Tetsuo; Mogi, Ken
    COGNITIVE NEURODYNAMICS, 2011, 5 (03): 231-239
  • [40] Mind wandering at encoding, but not at retrieval, disrupts one-shot stimulus-control learning
    Whitehead, Peter S.; Mahmoud, Younis; Seli, Paul; Egner, Tobias
    ATTENTION PERCEPTION & PSYCHOPHYSICS, 2021, 83 (07): 2968-2982