KABI: Class-Incremental Learning via Knowledge Amalgamation and Batch Identification

Cited by: 2
Authors
Li, Caixia [1 ]
Xu, Wenhua [1 ]
Si, Xizhu [1 ]
Song, Ping [1 ]
Affiliations
[1] Ocean Univ China, Coll Informat Sci & Engn, Qingdao, Peoples R China
Source
2021 5TH INTERNATIONAL CONFERENCE ON INNOVATION IN ARTIFICIAL INTELLIGENCE (ICIAI 2021) | 2021
Keywords
Incremental learning; Deep Convolutional Neural Networks; Catastrophic forgetting; Knowledge distillation;
DOI
10.1145/3461353.3461367
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the class-incremental learning setting, classes are typically presented batch by batch over time. Incremental learning often suffers from catastrophic forgetting: performance on previous classes degrades abruptly when a model is adapted to new classes. We find that incremental models trained with knowledge distillation discriminate well among classes within a batch, but confuse classes across different batches. We propose a class-incremental learning approach with knowledge amalgamation and batch identification (KABI), which effectively alleviates catastrophic forgetting. The idea is to first train an expert model for the new classes at the current state; then train an amalgamation model, by amalgamating knowledge from the expert model and the previous state's amalgamation model, to discriminate among classes within a batch; and in addition train a batch identifier to discriminate among batches. We conduct extensive experiments on three datasets, MNIST, CIFAR-100, and ILSVRC 2012, and show that KABI outperforms the second-best approach by 1.29%, 14.26%, and 21.51%, respectively. Surprisingly, the classification accuracies of our approach are sometimes even higher than the oracle results, which are obtained by training a model on all training samples from all classes.
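The two-stage idea in the abstract can be sketched minimally. The snippet below is an illustrative NumPy sketch, not the authors' implementation: the temperature `T`, the exact loss form (temperature-scaled KL divergence from two teachers, as in standard knowledge distillation), and the function names `amalgamation_loss` and `kabi_predict` are assumptions introduced here for illustration.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

def amalgamation_loss(student_logits, prev_logits, expert_logits,
                      old_idx, new_idx, T=2.0):
    """Distill the old-class outputs from the previous state's
    amalgamation model and the new-class outputs from the expert."""
    s = np.asarray(student_logits, dtype=float)
    return (distillation_loss(s[old_idx], prev_logits, T) +
            distillation_loss(s[new_idx], expert_logits, T))

def kabi_predict(x, batch_identifier, within_batch_models):
    """Two-stage inference: identify the batch, then classify
    within that batch's label set."""
    b = batch_identifier(x)
    return b, within_batch_models[b](x)
```

The separation mirrors the paper's observation: the amalgamation model only needs to discriminate within a batch, while cross-batch confusion is delegated to the dedicated batch identifier.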
Pages: 170-176
Page count: 7
Cited References
25 in total
[1]   Memory Aware Synapses: Learning What (not) to Forget [J].
Aljundi, Rahaf ;
Babiloni, Francesca ;
Elhoseiny, Mohamed ;
Rohrbach, Marcus ;
Tuytelaars, Tinne .
COMPUTER VISION - ECCV 2018, PT III, 2018, 11207 :144-161
[2]   Expert Gate: Lifelong Learning with a Network of Experts [J].
Aljundi, Rahaf ;
Chakravarty, Punarjay ;
Tuytelaars, Tinne .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :7120-7129
[3]   IL2M: Class Incremental Learning With Dual Memory [J].
Belouadah, Eden ;
Popescu, Adrian .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :583-592
[4]   End-to-End Incremental Learning [J].
Castro, Francisco M. ;
Marin-Jimenez, Manuel J. ;
Guil, Nicolas ;
Schmid, Cordelia ;
Alahari, Karteek .
COMPUTER VISION - ECCV 2018, PT XII, 2018, 11216 :241-257
[5]   Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence [J].
Chaudhry, Arslan ;
Dokania, Puneet K. ;
Ajanthan, Thalaiyasingam ;
Torr, Philip H. S. .
COMPUTER VISION - ECCV 2018, PT XI, 2018, 11215 :556-572
[6]   Learning without Memorizing [J].
Dhar, Prithviraj ;
Singh, Rajat Vikram ;
Peng, Kuan-Chuan ;
Wu, Ziyan ;
Chellappa, Rama .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :5133-5141
[7]  
Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672
[8]  
Hinton Geoffrey, 2015, ARXIV
[9]   Lifelong Learning via Progressive Distillation and Retrospection [J].
Hou, Saihui ;
Pan, Xinyu ;
Loy, Chen Change ;
Wang, Zilei ;
Lin, Dahua .
COMPUTER VISION - ECCV 2018, PT III, 2018, 11207 :452-467
[10]   Learning a Unified Classifier Incrementally via Rebalancing [J].
Hou, Saihui ;
Pan, Xinyu ;
Loy, Chen Change ;
Wang, Zilei ;
Lin, Dahua .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :831-839