Latent Coreset Sampling based Data-Free Continual Learning

Cited by: 3
Authors:
Wang, Zhuoyi [1 ]
Li, Dingcheng [1 ]
Li, Ping [1 ]
Affiliations:
[1] Baidu Res, Cognit Comp Lab, Bellevue, WA 98004 USA
Source:
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022 | 2022
Keywords:
Continual Learning; Data-free; Coreset Sampling; Latent representation;
DOI:
10.1145/3511808.3557375
Chinese Library Classification (CLC): TP [automation technology; computer technology]
Discipline Code: 0812
Abstract:
Catastrophic forgetting poses a major challenge in continual learning: old knowledge is forgotten when the model is updated on new tasks. Existing solutions tend to address this challenge through generative models or exemplar-replay strategies. However, such methods do not alleviate the problem of low-quality samples being generated or selected for replay, which directly reduces the effectiveness of the model, especially in scenarios with class imbalance, noise, or redundancy. Accordingly, selecting a suitable coreset during continual learning becomes significant in such settings. In this work, we propose a novel approach that leverages continual coreset sampling (CCS) to address these challenges. We aim to select the most representative subset at each iteration, such that when the model is trained on new tasks, the subset closely approximates the gradient of both the previous and current tasks with respect to the model parameters. In this way, adaptation of the model to new datasets can be more efficient. Furthermore, instead of storing old data to maintain old knowledge, our approach preserves it in the latent space: we augment the previous classes in the embedding space as pseudo sample vectors drawn from the old encoder's output, strengthened by joint training with the selected new data. This avoids data privacy invasions in real-world applications when the model is updated. Our experiments validate the effectiveness of the proposed approach against current baselines on various CV/NLP datasets, and demonstrate clear improvements in model adaptation and forgetting reduction in a data-free manner.
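The abstract describes two mechanisms: (i) coreset selection that matches the gradient of the retained subset to the gradient over the previous and current tasks, and (ii) data-free replay via pseudo sample vectors in the old encoder's latent space. The sketch below illustrates both ideas in PyTorch under stated assumptions; the paper does not include code, and every name here (`select_coreset`, `latent_pseudo_samples`, the greedy matching-pursuit selection rule, the Gaussian latent perturbation) is an illustrative assumption rather than the authors' implementation.

```python
# Minimal sketch, assuming a classification model and an in-memory dataset
# of (x, y) tensor pairs. This is NOT the authors' CCS code; the greedy
# matching-pursuit rule and Gaussian latent noise are illustrative choices.
import torch
import torch.nn.functional as F


def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into a single vector."""
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])


def select_coreset(model, dataset, budget, reference_grad):
    """Greedily pick `budget` samples whose summed per-sample gradient
    approximates `reference_grad` (e.g., the full gradient over the old
    and new tasks). A practical system would restrict this to last-layer
    gradients, since storing full per-sample gradients costs O(N * P)."""
    params = [p for p in model.parameters() if p.requires_grad]
    sample_grads = []
    for x, y in dataset:
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.view(1))
        sample_grads.append(flat_grad(loss, params))

    selected, residual = [], reference_grad.clone()
    for _ in range(budget):
        # Score every candidate by alignment with the unmatched residual.
        scores = torch.stack([torch.dot(g, residual) / (g.norm() + 1e-12)
                              for g in sample_grads])
        if selected:
            scores[selected] = float("-inf")  # never pick a sample twice
        idx = int(scores.argmax())
        selected.append(idx)
        residual = residual - sample_grads[idx]  # matching-pursuit update
    return selected


def latent_pseudo_samples(stored_latents, num, noise_std=0.1):
    """Data-free replay: perturb latent vectors saved from the *old*
    encoder with Gaussian noise to synthesize pseudo samples of the
    previous classes, so no raw old data needs to be retained."""
    idx = torch.randint(len(stored_latents), (num,))
    base = stored_latents[idx]
    return base + noise_std * torch.randn_like(base)
```

In a joint training step, the selected real samples from the new task would pass through the full model, while the noisy latent vectors would be fed only to the classifier head, approximating replay of the old classes without touching the original data.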
Pages: 2078-2087
Page count: 10
Related Papers (50 in total):
  • [21] Kullback-Leibler Reservoir Sampling for Fairness in Continual Learning
    Nikoloutsopoulos, Sotirios
    Koutsopoulos, Iordanis
    Titsias, Michalis K.
    2024 IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING FOR COMMUNICATION AND NETWORKING, ICMLCN 2024, 2024, : 460 - 466
  • [22] Dual discriminator adversarial distillation for data-free model compression
    Zhao, Haoran
    Sun, Xin
    Dong, Junyu
    Manic, Milos
    Zhou, Huiyu
    Yu, Hui
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (05) : 1213 - 1230
  • [24] A TinyML Platform for On-Device Continual Learning With Quantized Latent Replays
    Ravaglia, Leonardo
    Rusci, Manuele
    Nadalini, Davide
    Capotondi, Alessandro
    Conti, Francesco
    Benini, Luca
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2021, 11 (04) : 789 - 802
  • [25] Continual Horizontal Federated Learning for Heterogeneous Data
    Mori, Junki
    Teranishi, Isamu
    Furukawa, Ryo
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [26] Continual Learning with Diffusion-based Generative Replay for Industrial Streaming Data
    He, Jiayi
    Chen, Jiao
    Liu, Qianmiao
    Dai, Suyan
    Tang, Jianhua
    Liu, Dongpo
2024 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA, ICCC, 2024
  • [27] A mechanics-based data-free Problem Independent Machine Learning (PIML) model for large-scale structural analysis and design optimization
    Huang, Mengcheng
    Liu, Chang
    Guo, Yilin
    Zhang, Linfeng
    Du, Zongliang
    Guo, Xu
    JOURNAL OF THE MECHANICS AND PHYSICS OF SOLIDS, 2024, 193
  • [28] A Stable and Efficient Data-Free Model Attack With Label-Noise Data Generation
    Zhang, Zhixuan
    Zheng, Xingjian
    Qing, Linbo
    Liu, Qi
    Wang, Pingyu
    Liu, Yu
    Liao, Jiyang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 3131 - 3145
  • [29] Max-Margin Deep Diverse Latent Dirichlet Allocation With Continual Learning
    Chen, Wenchao
    Chen, Bo
    Liu, Yingqi
    Cao, Xuefei
    Zhao, Qianru
    Zhang, Hao
    Tian, Long
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (07) : 5639 - 5653
  • [30] Continual Learning Based on Knowledge Distillation and Representation Learning
    Chen, Xiu-Yan
    Liu, Jian-Wei
    Li, Wen-Tao
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 27 - 38