Continual learning with selective nets

Cited by: 0
Authors
Luu, Hai Tung [1 ]
Szemenyei, Marton [1 ]
Affiliations
[1] Budapest Univ Technol & Econ, Control Engn & Informat Technol, Budapest, Hungary
Keywords
Continual learning; Computer vision; Image classification; Machine learning
DOI
10.1007/s10489-025-06497-z
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The widespread adoption of foundation models has significantly transformed machine learning, enabling even straightforward architectures to achieve results comparable to state-of-the-art methods. Inspired by the brain's natural learning process, in which studying a new concept activates distinct neural pathways and recalling that memory requires a specific stimulus to fully recover the information, we present a novel approach to dynamic task identification and submodel selection in continual learning. Our method leverages the DINOv2 foundation model (learning robust visual features without supervision) to handle multi-experience datasets by dividing them into multiple experiences, each representing a subset of classes. To build a memory of these classes, we employ strategies such as storing random real images or distilled images, using k-nearest neighbours (kNN) to identify the samples closest to each cluster, and using support vector machines (SVM) to select the most representative samples. During testing, where the task identity (ID) is not provided, we extract features of the test image and use distance measurements to match them against the stored features. Additionally, we introduce a new forgetting metric specifically designed to measure the forgetting rate in task-agnostic continual learning scenarios, unlike traditional task-specific approaches; it captures the extent of knowledge loss across tasks when the task identity is unknown during inference. Despite its simple architecture, our method delivers competitive performance across various datasets, surpassing state-of-the-art results in certain instances.
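As a rough illustration of the test-time routing the abstract describes, the sketch below matches a query image's DINOv2 features against stored exemplar features and routes the query to the nearest experience. This is a minimal sketch, not the paper's implementation: the memory dictionary, the cosine-similarity rule, and the helper names (extract, identify_task) are assumptions, and the paper's kNN/SVM exemplar-selection strategies are not reproduced here.

```python
# Minimal sketch of distance-based task identification over DINOv2 features.
# Names and the matching rule are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

# Frozen DINOv2 backbone from torch hub (input sides must be multiples of 14).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

@torch.no_grad()
def extract(images: torch.Tensor) -> torch.Tensor:
    """Return L2-normalised DINOv2 CLS features for a batch of images."""
    feats = backbone(images)            # (B, D)
    return F.normalize(feats, dim=-1)

# memory[task_id] holds features of the exemplars stored for that experience
# (e.g. random real images, or samples picked by kNN/SVM as in the paper).
memory: dict[int, torch.Tensor] = {}

def identify_task(image: torch.Tensor) -> int:
    """Pick the experience whose stored features lie closest to the query."""
    q = extract(image.unsqueeze(0))     # (1, D)
    best_task, best_sim = -1, -float("inf")
    for task_id, feats in memory.items():
        sim = (q @ feats.T).max().item()  # cosine similarity (features normalised)
        if sim > best_sim:
            best_task, best_sim = task_id, sim
    return best_task                    # route the query to that task's submodel
```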
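The abstract does not specify the formula for the proposed task-agnostic forgetting metric; the following is one plausible formulation, assuming forgetting on each experience is the drop between the best task-agnostic accuracy it ever reached and its accuracy after the final task, so that task-identification mistakes count against it. The function name and accuracy-matrix layout are hypothetical.

```python
# Hypothetical task-agnostic forgetting measure, in the spirit of the classic
# average-forgetting metric but computed on accuracies obtained WITHOUT task IDs.
# The paper's actual definition may differ.
def average_forgetting(acc: list[list[float]]) -> float:
    """acc[t][j] = task-agnostic accuracy on experience j after training task t."""
    T = len(acc)
    drops = []
    for j in range(T - 1):  # the final experience has no chance to be forgotten
        best = max(acc[t][j] for t in range(j, T - 1))
        drops.append(best - acc[T - 1][j])
    return sum(drops) / len(drops) if drops else 0.0
```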
Pages: 15
Related Papers
50 records in total
  • [1] Continual learning with selective nets
    Luu, Hai Tung
    Szemenyei, Marton
    APPLIED INTELLIGENCE, 2025, 55 (7)
  • [2] Progressive learning: A deep learning framework for continual learning
    Fayek, Haytham M.
    Cavedon, Lawrence
    Wu, Hong Ren
    NEURAL NETWORKS, 2020, 128 : 345 - 357
  • [3] Adaptive Progressive Continual Learning
    Xu, Ju
    Ma, Jin
    Gao, Xuesong
    Zhu, Zhanxing
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (10) : 6715 - 6728
  • [4] From MNIST to ImageNet and back: benchmarking continual curriculum learning
    Faber, Kamil
    Zurek, Dominik
    Pietron, Marcin
    Japkowicz, Nathalie
    Vergari, Antonio
    Corizzo, Roberto
    MACHINE LEARNING, 2024, 113 (10) : 8137 - 8164
  • [5] Continual Learning With Knowledge Distillation: A Survey
    Li, Songze
    Su, Tonghua
    Zhang, Xuyao
    Wang, Zhongjie
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [6] Hierarchical Correlations Replay for Continual Learning
    Wang, Qiang
    Liu, Jiayi
    Ji, Zhong
    Pang, Yanwei
    Zhang, Zhongfei
    KNOWLEDGE-BASED SYSTEMS, 2022, 250
  • [7] Continual learning with invertible generative models
    Pomponi, Jary
    Scardapane, Simone
    Uncini, Aurelio
    NEURAL NETWORKS, 2023, 164 : 606 - 616
  • [8] Continual compression model for online continual learning
    Ye, Fei
    Bors, Adrian G.
    APPLIED SOFT COMPUTING, 2024, 167
  • [9] Logarithmic Continual Learning
    Masarczyk, Wojciech
    Wawrzynski, Pawel
    Marczak, Daniel
    Deja, Kamil
    Trzcinski, Tomasz
    IEEE ACCESS, 2022, 10 : 117001 - 117010
  • [10] EXEMPLAR-FREE ONLINE CONTINUAL LEARNING
    He, Jiangpeng
    Zhu, Fengqing
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2022: 541 - 545