Self-Net: Lifelong Learning via Continual Self-Modeling

Cited by: 8
Authors
Mandivarapu, Jaya Krishna [1 ]
Camp, Blake [1 ]
Estrada, Rolando [1 ]
Affiliations
[1] Georgia State Univ, Dept Comp Sci, Atlanta, GA 30303 USA
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE | 2020, Vol. 3
Keywords
deep learning; continual learning; autoencoders; manifold learning; catastrophic forgetting;
D O I
10.3389/frai.2020.00019
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning a set of tasks over time, also known as continual learning (CL), is one of the most challenging problems in artificial intelligence. While recent approaches achieve some degree of CL in deep neural networks, they either (1) store a new network (or an equivalent number of parameters) for each new task, (2) store training data from previous tasks, or (3) restrict the network's ability to learn new tasks. To address these issues, we propose a novel framework, Self-Net, that uses an autoencoder to learn a set of low-dimensional representations of the weights learned for different tasks. We demonstrate that these low-dimensional vectors can then be used to generate high-fidelity recollections of the original weights. Self-Net can incorporate new tasks over time with little retraining, minimal loss in performance for older tasks, and without storing prior training data. We show that our technique achieves over 10X storage compression in a continual fashion, and that it outperforms state-of-the-art approaches on numerous datasets, including continual versions of MNIST, CIFAR10, CIFAR100, Atari, and task-incremental CORe50. To the best of our knowledge, we are the first to use autoencoders to sequentially encode sets of network weights to enable continual learning.
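The abstract's core idea is that the trained weights of many task networks can be compressed into low-dimensional codes by an autoencoder, and later decoded into high-fidelity "recollections" of the original weights. A minimal sketch of that idea follows, using a closed-form linear autoencoder (SVD/PCA) over toy weight vectors rather than the paper's actual trained autoencoder; the array names and the rank-3 toy data are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the flattened weight vectors of 5 task networks.
# Random data with shared low-rank structure, so a small code suffices.
basis = rng.normal(size=(3, 1000))       # hidden low-dim structure
codes_true = rng.normal(size=(5, 3))
task_weights = codes_true @ basis        # 5 tasks x 1000 weights each

# A linear autoencoder fit in closed form via SVD: the encoder projects
# each weight vector onto the top-k principal directions, and the
# decoder maps the resulting code back to weight space.
k = 3
mean = task_weights.mean(axis=0)
U, S, Vt = np.linalg.svd(task_weights - mean, full_matrices=False)
encode = lambda w: (w - mean) @ Vt[:k].T   # weights -> low-dim code
decode = lambda z: z @ Vt[:k] + mean       # code -> recollected weights

codes = encode(task_weights)             # store only these (5 x k floats)
recollected = decode(codes)

err = np.max(np.abs(recollected - task_weights))
print(f"max reconstruction error: {err:.2e}")
```

Because the toy data is exactly rank 3, a 3-dimensional code reconstructs the weights to floating-point precision; storing the `5 x 3` codes plus the decoder in place of the full `5 x 1000` weight matrix illustrates the storage compression the abstract claims.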
Pages: 14