DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion

Cited by: 114
Authors
Douillard, Arthur [1 ,2 ]
Rame, Alexandre [1 ]
Couairon, Guillaume [1 ,3 ]
Cord, Matthieu [1 ,4 ]
Affiliations
[1] Sorbonne Univ, Paris, France
[2] Heuritech, Paris, France
[3] Meta AI, Menlo Pk, CA USA
[4] Valeo.ai, Paris, France
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
DOI
10.1109/CVPR52688.2022.00907
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep network architectures struggle to continually learn new tasks without forgetting previous ones. A recent trend indicates that dynamic architectures based on an expansion of the parameters can efficiently reduce catastrophic forgetting in continual learning. However, existing approaches often require a task identifier at test time, need complex tuning to balance the growing number of parameters, and barely share any information across tasks. As a result, they struggle to scale to a large number of tasks without significant overhead. In this paper, we propose a transformer architecture based on a dedicated encoder/decoder framework. Critically, the encoder and decoder are shared among all tasks. Through a dynamic expansion of special tokens, we specialize each forward pass of our decoder network on a task distribution. Our strategy scales to a large number of tasks while having negligible memory and time overheads due to strict control of the parameter expansion. Moreover, this efficient strategy does not need any hyperparameter tuning to control the network's expansion. Our model reaches excellent results on CIFAR100 and state-of-the-art performance on the large-scale ImageNet100 and ImageNet1000 while having fewer parameters than concurrent dynamic frameworks.
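The abstract's core mechanism, a decoder shared across all tasks that is specialized per forward pass by dynamically grown "task tokens", can be sketched in a few lines. The snippet below is a minimal PyTorch illustration of that idea, not the authors' implementation: the class name TaskTokenDecoder, the method add_task, and all dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn

class TaskTokenDecoder(nn.Module):
    # Minimal sketch (not the authors' code): one decoder shared across
    # all tasks, specialized per forward pass by a learned task token.
    def __init__(self, dim: int = 384, num_heads: int = 6):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.dim = dim
        # One learnable token per task; growing by one task adds only
        # `dim` parameters, keeping the expansion strictly controlled.
        self.task_tokens = nn.ParameterList()

    def add_task(self):
        self.task_tokens.append(nn.Parameter(torch.randn(1, 1, self.dim) * 0.02))

    def forward(self, patch_tokens: torch.Tensor, task_id: int) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, dim) from the shared encoder.
        batch = patch_tokens.size(0)
        query = self.task_tokens[task_id].expand(batch, -1, -1)
        # The task token attends over the patch tokens, producing one
        # task-specialized embedding per image.
        out, _ = self.attn(query, patch_tokens, patch_tokens)
        return self.norm(out.squeeze(1))  # (batch, dim)

# Usage: grow one token per new task, then run one forward per learned task.
decoder = TaskTokenDecoder()
decoder.add_task()                  # task 0
decoder.add_task()                  # task 1
feats = torch.randn(4, 196, 384)    # stand-in for shared-encoder output
emb0 = decoder(feats, task_id=0)    # shape (4, 384)
emb1 = decoder(feats, task_id=1)    # shape (4, 384)
```

Because only the small per-task tokens grow with the number of tasks while the attention and normalization weights stay shared, the per-task memory overhead stays negligible, which is the scaling property the abstract claims.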
Pages: 9275 - 9285
Page count: 11