CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning

Cited: 64
Authors
Smith, James Seale [1 ,2 ]
Karlinsky, Leonid [2 ,4 ]
Gutta, Vyshnavi [1 ]
Cascante-Bonilla, Paola [2 ,3 ]
Kim, Donghyun [2 ,4 ]
Arbelle, Assaf [4 ]
Panda, Rameswar [2 ,4 ]
Feris, Rogerio [2 ,4 ]
Kira, Zsolt [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] MIT, IBM Watson AI Lab, Cambridge, MA 02139 USA
[3] Rice Univ, Houston, TX USA
[4] IBM Res, Armonk, NY USA
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
DOI
10.1109/CVPR52729.2023.01146
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Computer vision models suffer from a phenomenon known as catastrophic forgetting when learning novel concepts from continuously shifting training data. Typical solutions to this continual learning problem require extensive rehearsal of previously seen data, which increases memory costs and may violate data privacy. Recently, the emergence of large-scale pre-trained vision transformer models has enabled prompting approaches as an alternative to data rehearsal. These approaches rely on a key-query mechanism to generate prompts and have been found to be highly resistant to catastrophic forgetting in the well-established rehearsal-free continual learning setting. However, the key mechanism of these methods is not trained end-to-end with the task sequence. Our experiments show that this reduces their plasticity, sacrificing new-task accuracy, and leaves them unable to benefit from expanded parameter capacity. We instead propose to learn a set of prompt components that are assembled with input-conditioned weights to produce input-conditioned prompts, resulting in a novel attention-based end-to-end key-query scheme. Our experiments show that we outperform the current SOTA method DualPrompt on established benchmarks by as much as 4.5% in average final accuracy. We also outperform the state of the art by as much as 4.4% accuracy on a continual learning benchmark containing both class-incremental and domain-incremental task shifts, corresponding to many practical settings. Our code is available at https://github.com/GT-RIPL/CODA-Prompt.
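The decomposed prompting idea described in the abstract can be illustrated with a minimal PyTorch sketch. This is a hypothetical rendering, not the paper's code: the component count M, prompt length L_p, embedding dimension D, and the helper assemble_prompt are all assumed for illustration, and cosine similarity is assumed as the weighting function (see the linked repository for the actual implementation).

    import torch
    import torch.nn.functional as F

    # Hypothetical sizes: number of prompt components, prompt length, embed dim.
    M, L_p, D = 100, 8, 768
    P = torch.randn(M, L_p, D, requires_grad=True)  # learnable prompt components
    K = torch.randn(M, D, requires_grad=True)       # learnable key per component
    A = torch.randn(M, D, requires_grad=True)       # learnable attention vector per component

    def assemble_prompt(q):
        # q: (D,) query feature for one input, e.g. from a frozen pre-trained ViT.
        # The attention vector re-weights the query per component; cosine
        # similarity with the key yields an input-conditioned weight alpha_m.
        alpha = F.cosine_similarity(q.unsqueeze(0) * A, K, dim=-1)  # (M,)
        # The prompt is the alpha-weighted sum of the components.
        return torch.einsum('m,mld->ld', alpha, P)                  # (L_p, D)

    prompt = assemble_prompt(torch.randn(D))  # prompt fed into the transformer
    print(prompt.shape)                       # torch.Size([8, 768])

Because the weights alpha come from a differentiable similarity rather than a hard lookup, gradients flow through P, K, and A together with the task sequence, which is the end-to-end property the abstract contrasts against prior key-query prompting methods.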
Pages: 11909-11919 (11 pages)