Adaptive Decoupled Prompting for Class Incremental Learning

Cited by: 0
Authors
Zhang, Fanhao [1]
Wang, Shiye [1]
Li, Changsheng [1]
Yuan, Ye [1]
Wang, Guoren [1]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PT IX, PRCV 2024 | 2025 / Vol. 15039
Keywords
prompt learning; incremental learning; catastrophic forgetting; SYSTEMS
DOI
10.1007/978-981-97-8692-3_39
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Continual learning has garnered significant interest due to its practicality in enabling deep models to incrementally incorporate new tasks of different classes, without forgetting, in a rapidly evolving world. Prompt-based methods have become the prevailing approach along this line, owing to their ability to effectively instruct a pre-trained model on different tasks with a small pool of learnable prompts. However, prompt pool-based methods confine coarse-grained information to group-level prompts and therefore do not fully exploit the more detailed information carried by individual samples themselves. To address this, we propose an adaptive decoupled prompting method for class incremental learning. Specifically, we design an adaptive prompt generator that produces a specific prompt for each image of each task, thereby capturing knowledge at the instance level. Moreover, observing that relevant information exists across different tasks, we further decompose the prompt to capture the knowledge shared among multiple tasks. Experimental evaluations on four datasets demonstrate the effectiveness of the proposed Dual-AP (Adaptive Decoupled Prompting for Class Incremental Learning) in comparison with related class-incremental learning methods.
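The record gives no implementation details, but a minimal sketch of the idea described in the abstract (instance-conditioned prompts generated per image, decoupled into task-shared and task-specific parts) could look as follows. It assumes a frozen ViT-style backbone, and all module and parameter names (AdaptiveDecoupledPrompt, shared_prompt, task_prompts, generator) are hypothetical illustrations, not the authors' actual code:

    # Hypothetical sketch only: instance-conditioned, decoupled prompting.
    # Names and architecture are assumptions, not the paper's implementation.
    import torch
    import torch.nn as nn


    class AdaptiveDecoupledPrompt(nn.Module):
        def __init__(self, embed_dim=768, prompt_len=5, num_tasks=10):
            super().__init__()
            # Task-shared prompt tokens: knowledge reused across all tasks.
            self.shared_prompt = nn.Parameter(torch.zeros(prompt_len, embed_dim))
            # Task-specific prompt tokens: one small set per incremental task.
            self.task_prompts = nn.Parameter(torch.zeros(num_tasks, prompt_len, embed_dim))
            # Instance-level generator: maps a pooled image feature to a per-sample prompt.
            self.generator = nn.Sequential(
                nn.Linear(embed_dim, embed_dim),
                nn.GELU(),
                nn.Linear(embed_dim, prompt_len * embed_dim),
            )
            nn.init.normal_(self.shared_prompt, std=0.02)
            nn.init.normal_(self.task_prompts, std=0.02)
            self.prompt_len = prompt_len
            self.embed_dim = embed_dim

        def forward(self, img_feat, task_id):
            """img_feat: (B, D) pooled feature of the image from a frozen encoder."""
            b = img_feat.size(0)
            # Instance-specific prompt generated from the sample itself.
            inst = self.generator(img_feat).view(b, self.prompt_len, self.embed_dim)
            shared = self.shared_prompt.unsqueeze(0).expand(b, -1, -1)
            specific = self.task_prompts[task_id].unsqueeze(0).expand(b, -1, -1)
            # Shared + task-specific + instance-level prompt tokens, to be
            # prepended to the patch tokens of the frozen backbone.
            return torch.cat([shared, specific, inst], dim=1)  # (B, 3*prompt_len, D)


    if __name__ == "__main__":
        prompts = AdaptiveDecoupledPrompt()
        feats = torch.randn(4, 768)      # pooled [CLS] features from a frozen ViT
        out = prompts(feats, task_id=2)
        print(out.shape)                 # torch.Size([4, 15, 768])

In this sketch the concatenated prompt tokens would be prepended to the patch embeddings of a frozen vision transformer, which is the usual way prompt-based continual learners inject task knowledge; how Dual-AP actually conditions, decomposes, and trains its prompts is described in the full paper.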
Pages: 554-568
Number of pages: 15