MoBoo: Memory-Boosted Vision Transformer for Class-Incremental Learning

Cited by: 2
Authors
Ni, Bolin [1 ,2 ]
Nie, Xing [1 ,2 ]
Zhang, Chenghao [1 ,2 ]
Xu, Shixiong [1 ,2 ]
Zhang, Xin [3 ]
Meng, Gaofeng [1 ,2 ,4 ]
Xiang, Shiming [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] Beijing Inst Technol, Sch Informat & Elect, Radar Res Lab, Beijing 100081, Peoples R China
[4] HK Inst Sci & Innovat, CAS Ctr Artificial Intelligence & Robot, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Continual learning; class-incremental learning; vision transformer; image recognition;
DOI
10.1109/TCSVT.2024.3417431
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Number
0808 ; 0809 ;
Abstract
Continual learning strives to acquire knowledge across sequential tasks without forgetting previously assimilated knowledge. Current state-of-the-art methodologies utilize dynamic architectural strategies to increase network capacity for new tasks. However, these approaches often suffer from rapid growth in the number of parameters. While some methods introduce an additional network-compression stage to address this, they tend to construct complex and hyperparameter-sensitive systems. In this work, we address this challenge by proposing the Memory-Boosted Transformer (MoBoo), a novel alternative to conventional architecture expansion and compression. Specifically, we design a memory-augmented attention mechanism by establishing a memory bank in which the "key" and "value" linear projections are stored. This memory integration prompts the model to leverage previously learned knowledge, thereby enhancing stability during training at a marginal cost. The memory bank is lightweight and can be easily managed with a straightforward queue. Moreover, to increase the model's plasticity, we design a memory-attentive aggregator, which leverages cross-attention to adaptively summarize the image representation from the encoder output, which incorporates historical knowledge. Extensive experiments on challenging benchmarks demonstrate the effectiveness of our method. For example, on ImageNet-100 under 10 tasks, our method outperforms the current state-of-the-art methods by +3.74% in average accuracy while using fewer parameters.
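The abstract's core idea of a memory-augmented attention whose "key"/"value" projections from earlier tasks are kept in a queue-managed bank can be sketched as follows. This is a minimal NumPy illustration of one plausible reading of that mechanism, not the paper's implementation; the class and method names are illustrative assumptions.

```python
from collections import deque
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MemoryBoostedAttention:
    """Toy sketch: single-head attention whose keys/values are augmented
    with those produced by key/value projections stored from earlier tasks.
    The memory bank is a bounded queue, as the abstract describes."""

    def __init__(self, dim, capacity=4, seed=0):
        self.dim = dim
        rng = np.random.default_rng(seed)
        self.Wq = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.Wk = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.Wv = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        # memory bank: queue of past (Wk, Wv) projection pairs;
        # deque(maxlen=...) evicts the oldest pair automatically
        self.bank = deque(maxlen=capacity)

    def snapshot(self):
        # store the current key/value projections before training a new task
        self.bank.append((self.Wk.copy(), self.Wv.copy()))

    def forward(self, x):
        # x: (tokens, dim); concatenate keys/values from the current
        # projections and from every stored pair in the bank
        q = x @ self.Wq
        ks = [x @ self.Wk] + [x @ Wk for Wk, _ in self.bank]
        vs = [x @ self.Wv] + [x @ Wv for _, Wv in self.bank]
        k = np.concatenate(ks, axis=0)
        v = np.concatenate(vs, axis=0)
        attn = softmax(q @ k.T / np.sqrt(self.dim))
        return attn @ v
```

Because attention normalizes over the concatenated key set, stored projections bias the output toward previously learned features (stability) without adding trainable parameters per task; only the bounded bank grows, which matches the abstract's "marginal cost" claim.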
Pages: 11169-11183
Page count: 15