PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning

Cited by: 447
Authors
Douillard, Arthur [1 ,2 ]
Cord, Matthieu [2 ,3 ]
Ollion, Charles [1 ]
Robert, Thomas [1 ]
Valle, Eduardo [4 ]
Affiliations
[1] Heuritech, Paris, France
[2] Sorbonne Univ, Paris, France
[3] Valeo, Paris, France
[4] Univ Estadual Campinas, Campinas, Brazil
Source
COMPUTER VISION - ECCV 2020, PT XX | 2020 / Vol. 12365
Funding
São Paulo Research Foundation (FAPESP), Brazil;
Keywords
Incremental learning; Representation learning; Pooling;
DOI
10.1007/978-3-030-58565-5_6
Chinese Library Classification (CLC) number
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Lifelong learning has attracted much attention, but existing works still struggle to fight catastrophic forgetting and accumulate knowledge over long stretches of incremental learning. In this work, we propose PODNet, a model inspired by representation learning. By carefully balancing the compromise between remembering the old classes and learning new ones, PODNet fights catastrophic forgetting, even over very long runs of small incremental tasks - a setting so far unexplored by current works. PODNet innovates on existing art with an efficient spatial-based distillation-loss applied throughout the model and a representation comprising multiple proxy vectors for each class. We validate those innovations thoroughly, comparing PODNet with three state-of-the-art models on three datasets: CIFAR100, ImageNet100, and ImageNet1000. Our results showcase a significant advantage of PODNet over existing art, with accuracy gains of 12.10, 6.51, and 2.85 percentage points, respectively.
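The abstract's "efficient spatial-based distillation-loss applied throughout the model" refers to distilling pooled feature statistics between the old (frozen) and new model at each network stage. A minimal numpy sketch of such a pooled spatial distillation loss is below; the function name `pod_spatial_loss` and the exact normalization choices are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def pod_spatial_loss(old_maps, new_maps):
    """Sketch of a pooled-outputs spatial distillation loss (assumed form).

    old_maps / new_maps: lists of feature maps of shape (C, H, W), one per
    network stage, from the frozen old model and the current model.
    Each map is pooled along width and along height, the pooled vectors are
    concatenated, L2-normalized, and compared with a squared Euclidean
    distance, averaged over stages.
    """
    loss = 0.0
    for a, b in zip(old_maps, new_maps):
        # Sum over width (axis 2) and over height (axis 1), then flatten.
        a_vec = np.concatenate([a.sum(axis=2).ravel(), a.sum(axis=1).ravel()])
        b_vec = np.concatenate([b.sum(axis=2).ravel(), b.sum(axis=1).ravel()])
        # L2-normalize so the loss compares spatial statistics, not magnitudes.
        a_vec = a_vec / (np.linalg.norm(a_vec) + 1e-8)
        b_vec = b_vec / (np.linalg.norm(b_vec) + 1e-8)
        loss += np.sum((a_vec - b_vec) ** 2)
    return loss / len(old_maps)
```

Pooling before comparing (rather than matching full feature maps) leaves the new model some spatial slack to learn new classes while still constraining the statistics that encode old knowledge, which is the compromise the abstract describes.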
Pages: 86-102
Page count: 17