Continual learning with selective nets

Cited by: 0
Authors
Luu, Hai Tung [1]
Szemenyei, Marton [1]
Affiliations
[1] Budapest University of Technology and Economics, Department of Control Engineering and Information Technology, Budapest, Hungary
Keywords
Continual learning; Computer vision; Image classification; Machine learning
DOI
10.1007/s10489-025-06497-z
CLC classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The widespread adoption of foundation models has significantly transformed machine learning, enabling even straightforward architectures to achieve results comparable to state-of-the-art methods. Inspired by the brain's natural learning process, in which studying a new concept activates distinct neural pathways and recalling that memory requires a specific stimulus to fully recover the information, we present a novel approach to dynamic task identification and submodel selection in continual learning. Our method leverages the DINOv2 foundation model, which learns robust visual features without supervision, to handle multi-experience datasets by dividing them into multiple experiences, each representing a subset of classes. To build a memory of these classes, we employ strategies such as storing random real images, storing distilled images, using k-nearest neighbours (kNN) to identify the samples closest to each cluster, and using support vector machines (SVM) to select the most representative samples. During testing, where the task identity (ID) is not provided, we extract features from the test image and use distance measurements to match it against the stored features. Additionally, we introduce a new forgetting metric specifically designed to measure the forgetting rate in task-agnostic continual learning scenarios, unlike traditional task-specific approaches; it captures the extent of knowledge loss across tasks when the task identity is unknown during inference. Despite its simple architecture, our method delivers competitive performance across various datasets, surpassing state-of-the-art results in certain instances.
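To make the routing idea in the abstract concrete, below is a minimal Python sketch of prototype-memory building and task-agnostic task identification with a frozen DINOv2 backbone. It is an illustration under assumptions, not the authors' implementation: the backbone variant (dinov2_vits14), the exemplar count, and the helper names (knn_exemplars, svm_exemplars, identify_task) are hypothetical; only the kNN/SVM selection strategies and the distance-based matching of test features against stored features follow the abstract's description.

import numpy as np
import torch
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

# Frozen DINOv2 backbone as a feature extractor (weights via torch.hub;
# the ViT-S/14 variant is an assumption, not taken from the paper).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224), ImageNet-normalized. Returns (N, D) features."""
    return backbone(images).cpu().numpy()

def knn_exemplars(feats: np.ndarray, k: int = 5) -> np.ndarray:
    """One selection strategy from the abstract: keep the k samples
    closest to the class centroid in feature space."""
    centroid = feats.mean(axis=0, keepdims=True)
    nn = NearestNeighbors(n_neighbors=k).fit(feats)
    _, idx = nn.kneighbors(centroid)
    return feats[idx[0]]

def svm_exemplars(class_feats: np.ndarray, other_feats: np.ndarray) -> np.ndarray:
    """Another strategy: keep this class's support vectors from a linear
    SVM separating it from all other classes."""
    X = np.vstack([class_feats, other_feats])
    y = np.concatenate([np.ones(len(class_feats)), np.zeros(len(other_feats))])
    svm = SVC(kernel="linear").fit(X, y)
    own = svm.support_[svm.support_ < len(class_feats)]  # support vectors of this class
    return class_feats[own]

def identify_task(test_feat: np.ndarray, memories: dict) -> int:
    """Task-agnostic routing: match the test feature to the experience
    whose stored exemplars are nearest, then run that submodel."""
    dists = {tid: np.linalg.norm(mem - test_feat, axis=1).min()
             for tid, mem in memories.items()}
    return min(dists, key=dists.get)

A per-experience prototype (mean feature) could replace the min-over-exemplars distance used here; the abstract does not specify the exact distance measurement, so that choice is left open in this sketch.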
Pages: 15