GopGAN: Gradients Orthogonal Projection Generative Adversarial Network With Continual Learning

Cited by: 0
Authors
Li, Xiaobin [1 ]
Wang, Weiqiang [1 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 101408, Peoples R China
Keywords
Task analysis; Training; Generators; Generative adversarial networks; Knowledge engineering; Semantics; Iterative algorithms; Catastrophic forgetting; continual learning; generative adversarial networks (GANs); orthogonal projection matrix
DOI
10.1109/TNNLS.2021.3093319
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Generative adversarial networks (GANs) suffer from catastrophic forgetting in continual learning: they tend to forget previous generation tasks and remember only the task they have just learned. In this article, we present a novel conditional GAN, called the gradients orthogonal projection GAN (GopGAN), which updates its weights in the orthogonal complement of the subspace spanned by the representations of training examples, and we mathematically demonstrate its ability to retain knowledge of learned tasks while learning a new one. Furthermore, the orthogonal projection matrix that modulates the gradients is derived mathematically, and an iterative algorithm for computing it during continual learning is given, so that training examples from learned tasks need not be stored when a new task is learned. In addition, a task-dependent latent vector construction is presented, and the constructed conditional latent vectors are used as the inputs of the generator in GopGAN to prevent the orthogonal subspace of learned tasks from vanishing. Extensive experiments on MNIST, EMNIST, SVHN, CIFAR10, and ImageNet-200 generation tasks show that the proposed GopGAN effectively copes with catastrophic forgetting and stably retains learned knowledge.
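To make the gradient-modulation idea concrete, the following minimal NumPy sketch shows one way an orthogonal projection matrix can be maintained iteratively, using a recursive-least-squares-style rank-1 downdate of the kind used in related orthogonal-projection continual-learning methods (e.g., OWM). It is a sketch under stated assumptions, not the authors' implementation: the function names update_projection, project_gradient, and task_latent, the stabilizer alpha, and the one-hot task code are all illustrative choices that may differ from GopGAN's exact derivation.

import numpy as np

def update_projection(P, a, alpha=1e-3):
    """One iterative update of the projection matrix P.

    Shrinks P toward the projector onto the orthogonal complement of the
    span of all representations `a` processed so far.

    P     : (d, d) current projection matrix (initialize with np.eye(d))
    a     : (d, 1) representation of one training example
    alpha : small constant keeping the rank-1 downdate numerically stable
    """
    Pa = P @ a
    return P - (Pa @ Pa.T) / (alpha + (a.T @ Pa).item())

def project_gradient(P, g):
    """Modulate a weight gradient so the update lies (approximately) in
    the orthogonal complement of previously learned representations."""
    return P @ g

def task_latent(z, task_id, num_tasks):
    """Hypothetical task-dependent latent vector: noise z concatenated
    with a one-hot task code (the paper's exact construction may differ)."""
    onehot = np.zeros((num_tasks, 1))
    onehot[task_id] = 1.0
    return np.concatenate([z, onehot], axis=0)

# Toy check: after absorbing task-1 representations, projected gradients
# are nearly orthogonal to them, so updates barely disturb old knowledge.
rng = np.random.default_rng(0)
d = 8
P = np.eye(d)
task1_reps = [rng.standard_normal((d, 1)) for _ in range(3)]
for _ in range(50):                      # repeated passes tighten the projection
    for a in task1_reps:
        P = update_projection(P, a)

g = rng.standard_normal((d, 1))          # a raw gradient from a new task
g_proj = project_gradient(P, g)
print([round((a.T @ g_proj).item(), 6) for a in task1_reps])  # all ~0.0

Because each representation is folded into P as it arrives, the projector can be carried from task to task without storing any past training examples, which is the property the abstract highlights.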
Pages: 215-227
Page count: 13