FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning

Cited by: 50
Authors
Petit, Gregoire [1 ,2 ]
Popescu, Adrian [1 ]
Schindler, Hugo [1 ]
Picard, David [2 ]
Delezoide, Bertrand [3 ]
Affiliations
[1] Univ Paris Saclay, LIST, CEA, F-91120 Palaiseau, France
[2] Univ Gustave Eiffel, CNRS, Ecole Ponts, LIGM, Marne La Vallee, France
[3] Amanda, 34 Ave Champs Elysees, F-75008 Paris, France
Source
2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2023
Funding
European Union Horizon 2020
Keywords
DOI
10.1109/WACV56688.2023.00390
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Exemplar-free class-incremental learning is very challenging due to the negative effect of catastrophic forgetting. A balance between the stability and plasticity of the incremental process is needed to obtain good accuracy for past as well as new classes. Existing exemplar-free class-incremental methods focus either on successive fine-tuning of the model, thus favoring plasticity, or on using a feature extractor fixed after the initial incremental state, thus favoring stability. We introduce a method which combines a fixed feature extractor and a pseudo-feature generator to improve the stability-plasticity balance. The generator uses a simple yet effective geometric translation of new class features to create representations of past classes, made of pseudo-features. This translation requires only the storage of the centroid representations of past classes to produce their pseudo-features. Actual features of new classes and pseudo-features of past classes are fed into a linear classifier which is trained incrementally to discriminate between all classes. The incremental process is much faster with the proposed method than with mainstream ones which update the entire deep model. Experiments are performed with three challenging datasets and different incremental settings. A comparison with ten existing methods shows that our method outperforms the others in most cases. FeTrIL code is available at https://github.com/GregoirePetit/FeTrIL.
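To make the translation step in the abstract concrete, here is a minimal Python sketch of how pseudo-features for a past class could be produced from new-class features and stored class centroids, then fed to a linear classifier. The function names, array shapes, toy data, and use of numpy and scikit-learn's LinearSVC are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of FeTrIL-style pseudo-feature translation.
# Assumptions: 512-d features from a frozen extractor, toy random data,
# and scikit-learn's LinearSVC as the incremental linear classifier.
import numpy as np
from sklearn.svm import LinearSVC

def translate_features(new_feats, new_centroid, past_centroid):
    """Turn features of a new class into pseudo-features of a past class
    via a geometric translation: f + mu(past) - mu(new)."""
    return new_feats + (past_centroid - new_centroid)

rng = np.random.default_rng(0)
new_feats = rng.normal(size=(100, 512))    # actual features of a new class
new_centroid = new_feats.mean(axis=0)      # centroid of the new class
past_centroid = rng.normal(size=512)       # stored centroid of a past class

# pseudo-features standing in for the past class
pseudo_past = translate_features(new_feats, new_centroid, past_centroid)

# incremental step: train a linear classifier on real new-class features
# plus translated pseudo-features of the past class
X = np.vstack([new_feats, pseudo_past])
y = np.array([1] * len(new_feats) + [0] * len(pseudo_past))
clf = LinearSVC().fit(X, y)
```

Only the past-class centroids need to be stored between incremental states; the feature extractor itself stays frozen, which is what keeps the incremental step fast.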
Pages: 3900-3909
Page count: 10