Learning without Memorizing

Cited by: 350
Authors
Dhar, Prithviraj [1 ]
Singh, Rajat Vikram [2 ]
Peng, Kuan-Chuan [2 ]
Wu, Ziyan [2 ]
Chellappa, Rama [1 ]
Affiliations
[1] Univ Maryland, College Pk, MD 20742 USA
[2] Siemens Corp Technol, Princeton, NJ USA
Source
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) | 2019
DOI
10.1109/CVPR.2019.00528
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Incremental learning (IL) is an important task aimed at increasing the capability of a trained model in terms of the number of classes it can recognize. The key problem in this task is the need to store data (e.g., images) associated with existing classes while teaching the classifier to learn new classes. However, this is impractical, as it increases the memory requirement at every incremental step, which makes it impossible to implement IL algorithms on edge devices with limited memory. Hence, we propose a novel approach, called 'Learning without Memorizing (LwM)', that preserves information about existing (base) classes, without storing any of their data, while making the classifier progressively learn the new classes. In LwM, we present an information-preserving penalty, the Attention Distillation Loss (L_AD), and demonstrate that penalizing changes in the classifier's attention maps helps retain information about the base classes as new classes are added. We show that adding L_AD to the distillation loss, an existing information-preserving loss, consistently outperforms the state of the art on the iILSVRC-small and iCIFAR-100 datasets in terms of the overall accuracy of base and incrementally learned classes.
Pages: 5133-5141
Page count: 9
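
The attention-distillation mechanism the abstract describes can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' released implementation: it assumes a frozen "teacher" snapshot of the model taken before the current incremental step and a "student" being trained on the new classes, computes Grad-CAM-style attention maps, and takes L_AD as the L1 distance between the L2-normalized, vectorized maps. The function names and the beta/gamma loss weights are hypothetical.

    import torch
    import torch.nn.functional as F

    def gradcam_attention(features, logits, class_idx):
        # Grad-CAM-style attention for the chosen base class.
        # features: (B, C, H, W) activations of the last conv block (part of the autograd graph)
        # logits:   (B, num_classes) classifier outputs; class_idx: (B,) long tensor
        scores = logits.gather(1, class_idx.unsqueeze(1)).sum()
        grads = torch.autograd.grad(scores, features, create_graph=True)[0]
        weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
        cam = F.relu((weights * features).sum(dim=1))    # (B, H, W) attention map
        return cam.flatten(1)                            # vectorized, (B, H*W)

    def attention_distillation_loss(q_teacher, q_student):
        # L_AD: L1 distance between L2-normalized vectorized attention maps.
        q_t = F.normalize(q_teacher, p=2, dim=1)
        q_s = F.normalize(q_student, p=2, dim=1)
        return (q_t - q_s).abs().sum(dim=1).mean()

    # Per-step objective (beta and gamma are illustrative hyperparameters):
    #   L_LwM = L_C (cross-entropy on new classes)
    #           + beta  * L_D  (distillation loss on base-class logits)
    #           + gamma * L_AD (attention distillation above)

Because q_teacher comes from the stored model snapshot rather than from stored images, the penalty preserves base-class information without keeping any base-class data, which is the memory saving the abstract emphasizes.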