IL2M: Class Incremental Learning With Dual Memory

Cited by: 232
Authors
Belouadah, Eden [1]
Popescu, Adrian [1]
Affiliations
[1] CEA, LIST, F-91191 Gif Sur Yvette, France
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019) | 2019
DOI
10.1109/ICCV.2019.00067
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
This paper presents a class incremental learning (IL) method which exploits fine-tuning and a dual memory to reduce the negative effect of catastrophic forgetting in image recognition. First, we simplify the current fine-tuning-based approaches which use a combination of classification and distillation losses to compensate for the limited availability of past data. We find that the distillation term actually hurts performance when a memory is allowed. Then, we modify the usual class IL memory component. Similar to existing works, a first memory stores exemplar images of past classes. A second memory is introduced here to store past class statistics obtained when they were initially learned. The intuition here is that classes are best modeled when all their data are available and that their initial statistics are useful across different incremental states. A prediction bias towards newly learned classes appears during inference because the dataset is imbalanced in their favor. The challenge is to make predictions of new and past classes more comparable. To do this, scores of past classes are rectified by leveraging contents from both memories. The method has negligible added cost, both in terms of memory and of inference complexity. Experiments with three large public datasets show that the proposed approach is more effective than a range of competitive state-of-the-art methods.
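The score rectification described in the abstract can be sketched as follows. This is a minimal illustration assuming the second memory stores, per class, the mean classification score from the state where the class was first learned; the function name, array layout, and exact form of the rescaling are assumptions based on the abstract, not the paper's verbatim implementation.

```python
import numpy as np

def rectify_scores(scores, is_past, init_class_mean, cur_class_mean,
                   init_model_mean, cur_model_mean):
    """Hypothetical sketch of dual-memory score rectification.

    scores          -- raw prediction scores for one image, shape [n_classes]
    is_past         -- boolean mask marking past classes
    init_class_mean -- per-class mean score stored when the class was
                       initially learned (the statistics memory)
    cur_class_mean  -- per-class mean score in the current incremental state
    init_model_mean -- per-class mean model confidence in the state where
                       that class was initially learned
    cur_model_mean  -- scalar mean model confidence in the current state
    """
    rectified = scores.copy()
    # Rectify only when the raw top-1 prediction is a new class: past-class
    # scores are scaled up to counter the bias toward newly learned classes.
    if not is_past[np.argmax(scores)]:
        rectified[is_past] = (
            scores[is_past]
            * (init_class_mean[is_past] / cur_class_mean[is_past])
            * (cur_model_mean / init_model_mean[is_past])
        )
    return rectified
```

The key design point conveyed by the abstract is that rectification only uses stored statistics and the current scores, so it adds no inference-time model evaluation and negligible memory.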
Pages: 583-592
Page count: 10