Latent Coreset Sampling based Data-Free Continual Learning

Cited by: 3
Authors
Wang, Zhuoyi [1]
Li, Dingcheng [1]
Li, Ping [1]
Affiliations
[1] Baidu Research, Cognitive Computing Lab, Bellevue, WA 98004 USA
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022 | 2022
Keywords
Continual learning; Data-free; Coreset sampling; Latent representation
DOI
10.1145/3511808.3557375
CLC number
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Catastrophic forgetting poses a major challenge in continual learning: previously acquired knowledge is lost when the model is updated on new tasks. Existing solutions typically address this challenge through generative models or exemplar-replay strategies. However, such methods may still generate or select low-quality samples for replay, which directly reduces the effectiveness of the model, especially under class imbalance, noise, or redundancy. Selecting a suitable coreset during continual learning therefore becomes crucial in such settings. In this work, we propose a novel approach that leverages continual coreset sampling (CCS) to address these challenges. At each iteration, we select the most representative subset of the data: when the model is trained on new tasks, the subset closely approximates/matches the gradient of both the previous and current tasks with respect to the model parameters, so that adaptation to new datasets becomes more efficient. Furthermore, instead of storing old data to maintain old knowledge, our approach preserves it in the latent space: we augment the previous classes in the embedding space with pseudo sample vectors drawn from the old encoder's output, strengthened by joint training with the selected new data. This avoids data-privacy violations in real-world applications when the model is updated. Our experiments over various CV/NLP datasets validate the effectiveness of the proposed approach against current baselines and show clear improvements in model adaptation and forgetting reduction in a data-free manner.
Pages: 2078-2087
Number of pages: 10
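
For readers skimming this record, the sketch below illustrates in plain PyTorch the two ideas the abstract describes: selecting a coreset whose aggregated gradient matches the full-data gradient, and replaying old classes as perturbed latent vectors instead of raw samples. This is a minimal sketch under assumed interfaces; the names (per_example_gradients, select_coreset, LatentReplayBuffer), the last-layer gradient approximation, and the greedy matching rule are illustrative assumptions, not the authors' CCS implementation.

    # Hypothetical sketch of gradient-matching coreset selection and
    # latent-space pseudo-sample replay, loosely following the abstract.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    @torch.no_grad()
    def per_example_gradients(encoder, head, x, y):
        """Per-example gradient of the cross-entropy loss w.r.t. a linear head:
        (softmax - one-hot) outer-producted with the latent vector."""
        z = encoder(x)                          # (B, D) latent representations
        p = F.softmax(head(z), dim=1)           # (B, C) predicted probabilities
        onehot = F.one_hot(y, p.size(1)).float()
        return torch.einsum('bc,bd->bcd', p - onehot, z).flatten(1)  # (B, C*D)

    def select_coreset(encoder, head, x, y, budget):
        """Greedy selection: pick examples whose mean gradient best matches the
        full-batch mean gradient (a simple stand-in for the CCS objective)."""
        grads = per_example_gradients(encoder, head, x, y)
        target = grads.mean(0)
        chosen, running = [], torch.zeros_like(target)
        for _ in range(budget):
            # candidate mean gradient if each remaining example were added next
            cand = (running.unsqueeze(0) + grads) / (len(chosen) + 1)
            err = (cand - target).norm(dim=1)
            if chosen:
                err[chosen] = float('inf')      # do not re-pick selected examples
            best = int(err.argmin())
            chosen.append(best)
            running = running + grads[best]
        return torch.tensor(chosen)

    class LatentReplayBuffer:
        """Stores old-class latent vectors and replays Gaussian-perturbed copies,
        so no raw old data needs to be retained."""
        def __init__(self, sigma=0.05):
            self.z, self.y, self.sigma = [], [], sigma

        def add(self, z, y):
            self.z.append(z.detach().cpu())
            self.y.append(y.detach().cpu())

        def sample(self, n):
            z, y = torch.cat(self.z), torch.cat(self.y)
            idx = torch.randint(len(z), (n,))
            return z[idx] + self.sigma * torch.randn_like(z[idx]), y[idx]

    # Example usage with toy shapes (purely illustrative):
    # encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
    # head = nn.Linear(16, 5)
    # idx = select_coreset(encoder, head, torch.randn(100, 32),
    #                      torch.randint(5, (100,)), budget=10)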