R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning

Cited by: 45
Authors
Gao, Qiankun [1 ]
Zhao, Chen [2 ]
Ghanem, Bernard [2 ]
Zhang, Jian [1 ]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Shenzhen, Peoples R China
[2] King Abdullah Univ Sci & Technol, Thuwal, Saudi Arabia
Source
COMPUTER VISION, ECCV 2022, PT XXIII | 2022 / Vol. 13683
Keywords
Incremental learning; Data-free; Representation learning
DOI
10.1007/978-3-031-20050-2_25
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Class-Incremental Learning (CIL) struggles with catastrophic forgetting when learning new knowledge, and Data-Free CIL (DFCIL) is even more challenging without access to the training data of previously learned classes. Though recent DFCIL works introduce techniques such as model inversion to synthesize data for previous classes, they fail to overcome forgetting due to the severe domain gap between the synthetic and real data. To address this issue, this paper proposes relation-guided representation learning (RRL) for DFCIL, dubbed R-DFCIL. In RRL, we introduce relational knowledge distillation to flexibly transfer the structural relation of new data from the old model to the current model. Our RRL-boosted DFCIL can guide the current model to learn representations of new classes that are better compatible with representations of previous classes, which greatly reduces forgetting while improving plasticity. To avoid mutual interference between representation and classifier learning, we employ a local rather than global classification loss during RRL. After RRL, the classification head is refined with a global class-balanced classification loss to address the data imbalance issue as well as learn the decision boundaries between new and previous classes. Extensive experiments on CIFAR100, Tiny-ImageNet200, and ImageNet100 demonstrate that our R-DFCIL significantly surpasses previous approaches and achieves new state-of-the-art performance for DFCIL. Code is available at https://github.com/jianzhangcs/R-DFCIL.
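The relational knowledge distillation mentioned in the abstract can be sketched as follows. This is an illustrative stand-in based on generic distance-wise relational distillation (matching pairwise-distance structure between old-model and current-model features), not the paper's exact loss; the function names and the mean-distance normalization are assumptions for the sketch.

```python
import math

def pairwise_distances(feats):
    """Euclidean distance between every pair of feature vectors in a batch."""
    n = len(feats)
    return [[math.sqrt(sum((a - b) ** 2 for a, b in zip(feats[i], feats[j])))
             for j in range(n)] for i in range(n)]

def normalized(dists):
    """Scale distances by their mean over off-diagonal pairs, so the relation
    matrix is invariant to the overall scale of the feature space
    (a hypothetical normalization choice for this sketch)."""
    n = len(dists)
    off_diag = [dists[i][j] for i in range(n) for j in range(n) if i != j]
    mu = (sum(off_diag) / len(off_diag)) if off_diag else 1.0
    mu = mu if mu > 0 else 1.0
    return [[v / mu for v in row] for row in dists]

def relational_kd_loss(old_feats, new_feats):
    """Mean squared error between the normalized relation (distance) matrices
    produced by the old (frozen) and current models on the same batch."""
    d_old = normalized(pairwise_distances(old_feats))
    d_new = normalized(pairwise_distances(new_feats))
    n = len(d_old)
    return sum((d_old[i][j] - d_new[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)
```

Because the loss compares relations between samples rather than raw features, the current model is free to shift its feature space for new classes as long as the structural relations the old model encoded are preserved, which is the intuition behind the "structural relation" transfer described above.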
Pages: 423-439
Page count: 17