Continual learning with selective nets

Cited by: 0
Authors
Luu, Hai Tung [1 ]
Szemenyei, Marton [1 ]
Affiliations
[1] Budapest University of Technology and Economics, Department of Control Engineering and Information Technology, Budapest, Hungary
Keywords
Continual learning; Computer vision; Image classification; Machine learning
DOI
10.1007/s10489-025-06497-z
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The widespread adoption of foundation models has significantly transformed machine learning, enabling even simple architectures to achieve results comparable to state-of-the-art methods. Inspired by the brain's natural learning process, in which studying a new concept activates distinct neural pathways and recalling that memory requires a specific stimulus to fully recover the information, we present a novel approach to dynamic task identification and submodel selection in continual learning. Our method leverages the DINOv2 foundation model (learning robust visual features without supervision) to handle datasets split into multiple experiences, each covering a subset of classes. To build a memory of these classes, we employ strategies such as storing random real images, distilled images, samples chosen by k-nearest neighbours (kNN) as the closest to each cluster, and samples selected by support vector machines (SVM) as the most representative. During testing, where the task identity (ID) is not provided, we extract features from the test image and use distance measurements to match them against the stored features. Additionally, we introduce a new forgetting metric specifically designed to measure the forgetting rate in task-agnostic continual learning scenarios, unlike traditional task-specific metrics; it captures the extent of knowledge loss across tasks when the task identity is unknown at inference. Despite its simple architecture, our method delivers competitive performance across various datasets, surpassing state-of-the-art results in certain instances.
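The sketch below illustrates, under stated assumptions, the distance-based matching step described in the abstract: a frozen backbone (e.g. DINOv2) is assumed to have already mapped each stored memory sample to a feature vector, and a query image is assigned to the nearest stored class prototype without any task ID. The names PrototypeMemory, add_experience, and predict are hypothetical, and exemplar selection via kNN, SVM, or distilled images is not reproduced here; this is not the authors' implementation.

import numpy as np

class PrototypeMemory:
    # Hypothetical sketch of task-agnostic, distance-based class/task
    # identification over frozen-backbone features (e.g. DINOv2 embeddings).

    def __init__(self):
        self.prototypes = {}      # class id -> mean feature vector
        self.class_to_task = {}   # class id -> experience (task) index

    def add_experience(self, task_id, features_by_class):
        # features_by_class: {class_id: (n_exemplars, d) array of features
        # extracted from the memory samples stored for that class}
        for cls, feats in features_by_class.items():
            self.prototypes[cls] = feats.mean(axis=0)
            self.class_to_task[cls] = task_id

    def predict(self, feat):
        # No task ID is given at test time: match the query feature against
        # every stored prototype and return the nearest class and its task.
        best_cls, best_dist = None, np.inf
        for cls, proto in self.prototypes.items():
            dist = np.linalg.norm(feat - proto)
            if dist < best_dist:
                best_cls, best_dist = cls, dist
        return best_cls, self.class_to_task[best_cls]

# Toy usage with random vectors standing in for backbone features.
rng = np.random.default_rng(0)
memory = PrototypeMemory()
memory.add_experience(0, {0: rng.normal(size=(5, 8)), 1: rng.normal(size=(5, 8))})
memory.add_experience(1, {2: rng.normal(size=(5, 8)), 3: rng.normal(size=(5, 8))})
predicted_class, predicted_task = memory.predict(rng.normal(size=8))

Averaging exemplar features into a single prototype per class is only one possible memory design; the abstract also mentions keeping individual samples selected by kNN or SVM, which would replace the mean with a nearest-neighbour search over the stored vectors.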
Pages: 15
Related papers
50 records in total
  • [41] EsaCL: An Efficient Continual Learning Algorithm
    Ren, Weijieying
    Honavar, Vasant G.
    PROCEEDINGS OF THE 2024 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2024, : 163 - 171
  • [42] CLEO: Continual Learning of Evolving Ontologies
    Muralidhara, Shishir
    Bukhari, Saqib
    Schneider, Georg
    Stricker, Didier
    Schuster, Rene
    COMPUTER VISION - ECCV 2024, PT LIV, 2025, 15112 : 328 - 344
  • [43] CUCL: Codebook for Unsupervised Continual Learning
    Chen, Cheng
    Song, Jingkuan
    Zhu, Xiaosu
    Zhu, Junchen
    Gao, Lianli
    Shen, Hengtao
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 1729 - 1737
  • [44] Extensible Steganalysis via Continual Learning
    Zhou, Zhili
    Yin, Zihao
    Meng, Ruohan
    Peng, Fei
    FRACTAL AND FRACTIONAL, 2022, 6 (12)
  • [45] Sample Condensation in Online Continual Learning
    Sangermano, Mattia
    Carta, Antonio
    Cossu, Andrea
    Bacciu, Davide
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [46] Continual Learning with Neuron Activation Importance
    Kim, Sohee
    Lee, Seungkyu
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT I, 2022, 13231 : 310 - 321
  • [47] Continual Learning Based on Knowledge Distillation and Representation Learning
    Chen, Xiu-Yan
    Liu, Jian-Wei
    Li, Wen-Tao
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 27 - 38
  • [48] Machine Unlearning by Reversing the Continual Learning
    Zhang, Yongjing
    Lu, Zhaobo
    Zhang, Feng
    Wang, Hao
    Li, Shaojing
    APPLIED SCIENCES-BASEL, 2023, 13 (16)
  • [49] Dirichlet Prior Networks for Continual Learning
    Wiewel, Felix
    Bartler, Alexander
    Yang, Bin
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [50] Online continual learning with declarative memory
    Xiao, Zhe
    Du, Zhekai
    Wang, Ruijin
    Gan, Ruimeng
    Li, Jingjing
    NEURAL NETWORKS, 2023, 163 : 146 - 155