Continual learning with selective nets

Cited by: 0
Authors
Luu, Hai Tung [1 ]
Szemenyei, Marton [1 ]
Affiliation
[1] Budapest Univ Technol & Econ, Control Engn & Informat Technol, Budapest, Hungary
Keywords
Continual learning; Computer vision; Image classification; Machine learning
DOI
10.1007/s10489-025-06497-z
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The widespread adoption of foundation models has significantly transformed machine learning, enabling even straightforward architectures to achieve results comparable to state-of-the-art methods. Inspired by the brain's natural learning process, in which studying a new concept activates distinct neural pathways and recalling that memory requires a specific stimulus to fully recover the information, we present a novel approach to dynamic task identification and submodel selection in continual learning. Our method leverages the DINOv2 foundation model (learning robust visual features without supervision) to handle multi-experience datasets by dividing them into multiple experiences, each representing a subset of classes. To build a memory of these classes, we employ strategies such as using random real images, distilled images, k-nearest neighbours (kNN) to identify the samples closest to each cluster, and support vector machines (SVM) to select the most representative samples. During testing, where the task identity (ID) is not provided, we extract features from the test image and use distance measurements to match it against the stored features. Additionally, we introduce a new forgetting metric specifically designed to measure the forgetting rate in task-agnostic continual learning scenarios, unlike traditional task-specific approaches. This metric captures the extent of knowledge loss across tasks when the task identity is unknown during inference. Despite its simple architecture, our method delivers competitive performance across various datasets, surpassing state-of-the-art results in certain instances.
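To make the pipeline described in the abstract concrete, the following minimal sketch (an illustration, not the authors' released code) shows one way the idea could be realised: frozen DINOv2 features, a small per-class memory of exemplar features built for each experience, and task-agnostic inference by matching a test feature against the stored exemplars. The function names, the random exemplar selection, and the cosine-similarity matching are assumptions made for this example; the paper additionally considers distilled images, kNN-based and SVM-based exemplar selection.

    # Minimal sketch, assuming a frozen DINOv2 backbone used purely as a feature extractor.
    import torch
    import torch.nn.functional as F

    # Real torch.hub entry point for the small DINOv2 ViT; forward() returns CLS features.
    backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
    backbone.eval()

    memory_feats = []   # stored exemplar features, concatenates to shape (N, D)
    memory_labels = []  # (class_id, task_id) for each stored exemplar

    @torch.no_grad()
    def add_experience(images, labels, task_id, per_class=5):
        """Store a few exemplar features per class for one experience.
        Exemplars are chosen at random here; other selection strategies
        (distilled images, kNN to cluster centres, SVM) are possible."""
        feats = F.normalize(backbone(images), dim=-1)            # (B, D) normalised features
        for c in labels.unique():
            idx = (labels == c).nonzero(as_tuple=True)[0]
            keep = idx[torch.randperm(len(idx))[:per_class]]
            memory_feats.append(feats[keep])
            memory_labels.extend([(int(c), task_id)] * len(keep))

    @torch.no_grad()
    def predict(image):
        """Task-agnostic inference: the nearest stored exemplar (cosine similarity)
        determines both the predicted class and the recovered task ID."""
        q = F.normalize(backbone(image.unsqueeze(0)), dim=-1)    # (1, D)
        bank = torch.cat(memory_feats)                           # (N, D)
        sims = q @ bank.T                                        # cosine similarities
        cls, task = memory_labels[int(sims.argmax())]
        return cls, task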
Pages: 15