Continual compression model for online continual learning

Cited: 0
Authors
Ye, Fei [1 ]
Bors, Adrian G. [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu, Peoples R China
[2] Univ York, Dept Comp Sci, York YO10 5GH, England
Keywords
Continual learning; Dynamic expansion model; Task-Free Continual Learning; Component pruning
DOI
10.1016/j.asoc.2024.112427
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Task-Free Continual Learning (TFCL) is a demanding but realistic continual learning setting that aims to address catastrophic forgetting in sequential learning systems. In this paper, we tackle catastrophic forgetting by introducing a dynamic expansion framework that adaptively increases the model's capacity for learning novel data while preserving previously acquired knowledge, using a minimal-size processing architecture. The proposed framework incorporates three key mechanisms to mitigate the model's forgetting: (1) an expansion mechanism based on the Maximum Mean Discrepancy (MMD), which measures the disparity between previously acquired knowledge and the new training data and serves as a signal for expanding the model's architecture; (2) a component discarding mechanism that eliminates components carrying redundant information, optimizing the model size while fostering knowledge diversity; and (3) a novel training sample selection strategy that increases the diversity of the training data for each task. A series of TFCL experiments demonstrates that the proposed framework outperforms all baselines while using fewer components than alternative dynamic expansion models. On the Split Mini ImageNet benchmark, where the original dataset is partitioned into multiple tasks, accuracy improves by more than 2% over the closest baseline.
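For intuition, the sketch below (Python/NumPy) illustrates the kind of MMD-based expansion signal the abstract describes: the squared MMD between features of previously learned samples and a new batch is compared against a threshold, and a large discrepancy triggers the addition of a new component. The function names, the Gaussian kernel choice, and the threshold and sigma values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of a and b."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd_squared(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of the squared Maximum Mean Discrepancy
    between two sample sets x and y."""
    k_xx = gaussian_kernel(x, x, sigma)
    k_yy = gaussian_kernel(y, y, sigma)
    k_xy = gaussian_kernel(x, y, sigma)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

def should_expand(memory_feats, new_feats, threshold=0.1, sigma=1.0):
    """Signal architecture expansion when the discrepancy between previously
    learned data and the new batch exceeds a threshold.
    `threshold` and `sigma` are hypothetical values for illustration."""
    return mmd_squared(memory_feats, new_feats, sigma) > threshold

# Example: a clearly shifted batch should trigger expansion.
rng = np.random.default_rng(0)
old = rng.normal(0.0, 1.0, size=(64, 16))  # features of learned data
new = rng.normal(2.0, 1.0, size=(64, 16))  # features of a shifted batch
print(should_expand(old, new))             # True: distribution shift detected
```

In this reading, a high MMD indicates that the new data comes from a distribution the existing components have not captured, so growing the architecture is preferable to overwriting existing knowledge; how the paper sets the threshold and kernel is not specified in the abstract.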
Pages: 14
Related Papers
50 records in total
  • [1] Sample Condensation in Online Continual Learning
    Sangermano, Mattia
    Carta, Antonio
    Cossu, Andrea
    Bacciu, Davide
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [2] Online continual learning with declarative memory
    Xiao, Zhe
    Du, Zhekai
    Wang, Ruijin
    Gan, Ruimeng
    Li, Jingjing
    NEURAL NETWORKS, 2023, 163 : 146 - 155
  • [3] Scalable Adversarial Online Continual Learning
    Dam, Tanmoy
    Pratama, Mahardhika
    Ferdaus, Meftahul
    Anavatti, Sreenatha
    Abbas, Hussein
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT III, 2023, 13715 : 373 - 389
  • [4] EXEMPLAR-FREE ONLINE CONTINUAL LEARNING
    He, Jiangpeng
    Zhu, Fengqing
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 541 - 545
  • [5] Online Continual Learning for Control of Mobile Robots
    Sarabakha, Andriy
    Qiao, Zhongzheng
    Ramasamy, Savitha
    Suganthan, Ponnuthurai Nagaratnam
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [6] Computationally Efficient Rehearsal for Online Continual Learning
    Davalas, Charalampos
    Michail, Dimitrios
    Diou, Christos
    Varlamis, Iraklis
    Tserpes, Konstantinos
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2022, PT III, 2022, 13233 : 39 - 49
  • [7] Adaptive Online Domain Incremental Continual Learning
    Gunasekara, Nuwan
    Gomes, Heitor
    Bifet, Albert
    Pfahringer, Bernhard
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT I, 2022, 13529 : 491 - 502
  • [8] Online continual streaming learning for embedded space applications
    Mazouz, Alaa Eddine
    Nguyen, Van-Tam
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2024, 21 (03)
  • [9] Adaptive online continual multi-view learning
    Yu, Yang
    Du, Zhekai
    Meng, Lichao
    Li, Jingjing
    Hu, Jiang
    INFORMATION FUSION, 2024, 103
  • [10] Contrastive Correlation Preserving Replay for Online Continual Learning
    Yu, Da
    Zhang, Mingyi
    Li, Mantian
    Zha, Fusheng
    Zhang, Junge
    Sun, Lining
    Huang, Kaiqi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (01) : 124 - 139