End-to-End Incremental Learning

Cited by: 822
Authors
Castro, Francisco M. [1 ]
Marin-Jimenez, Manuel J. [2 ]
Guil, Nicolas [1 ]
Schmid, Cordelia [3 ]
Alahari, Karteek [3 ]
Affiliations
[1] Univ Malaga, Dept Comp Architecture, Malaga, Spain
[2] Univ Cordoba, Dept Comp & Numer Anal, Cordoba, Spain
[3] Univ Grenoble Alpes, INRIA, CNRS, Grenoble INP, LJK, F-38000 Grenoble, France
Source
COMPUTER VISION - ECCV 2018, PT XII | 2018 / Vol. 11216
Keywords
Incremental learning; CNN; Distillation loss; Image classification; Memory
DOI
10.1007/978-3-030-01258-8_15
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is because current neural network architectures require the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model, a requirement that quickly becomes unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.
Pages: 241-257
Page count: 17
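
The loss described in the abstract, a cross-entropy term over old and new classes combined with a distillation term that preserves the frozen old model's softened predictions on the old-class outputs, can be sketched as follows. This is a minimal PyTorch illustration, not the authors' released code: the function name, the temperature, and the weighting factor are assumptions, and KL divergence stands in for the softened-cross-entropy distillation term (the two differ only by a term constant in the model parameters, so their gradients coincide).

import torch
import torch.nn.functional as F

def incremental_loss(logits, labels, old_logits, n_old_classes,
                     temperature=2.0, distill_weight=1.0):
    # logits:     current model outputs, shape (batch, n_old + n_new)
    # labels:     ground-truth indices over all classes, shape (batch,)
    # old_logits: outputs of the frozen pre-update model on the same
    #             batch, shape (batch, n_old)

    # Cross-entropy over old + new classes learns the new data.
    ce = F.cross_entropy(logits, labels)

    # Distillation: match the old model's softened distribution on the
    # logits corresponding to the old classes only.
    log_p = F.log_softmax(logits[:, :n_old_classes] / temperature, dim=1)
    q = F.softmax(old_logits / temperature, dim=1)
    distill = F.kl_div(log_p, q, reduction="batchmean") * temperature ** 2

    return ce + distill_weight * distill

In an incremental step, old_logits would be computed once per batch with a snapshot of the network taken before the new classes were added, so that training on new data and the small exemplar set does not drift away from the old-class behaviour.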