Continual learning with invertible generative models

Cited: 0
Authors
Pomponi, Jary [1 ]
Scardapane, Simone [1 ]
Uncini, Aurelio [1 ]
Affiliations
[1] Sapienza Univ Rome, Dept Informat Engn Elect & Telecommun DIET, Rome, Italy
Keywords
Machine learning; Continual learning; Normalizing flow; Catastrophic forgetting; Neural networks
DOI
10.1016/j.neunet.2023.05.020
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Catastrophic forgetting (CF) happens whenever a neural network overwrites past knowledge while being trained on new tasks. Common techniques to handle CF include regularization of the weights (using, e.g., their importance on past tasks) and rehearsal strategies, in which the network is continually re-trained on past data. Generative models have also been applied to the latter, to provide an endless source of data. In this paper, we propose a novel method that combines the strengths of regularization and generative-based rehearsal approaches. Our generative model consists of a normalizing flow (NF), a probabilistic and invertible neural network, trained on the internal embeddings of the network. By keeping a single NF throughout the training process, we show that our memory overhead remains constant. In addition, exploiting the invertibility of the NF, we propose a simple approach to regularize the network's embeddings with respect to past tasks. We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads. © 2023 Elsevier Ltd. All rights reserved.
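The abstract describes the core mechanism: a single normalizing flow is fit to the classifier's internal embeddings, and its invertibility lets pseudo-embeddings of past tasks be generated for rehearsal. The sketch below only illustrates that idea and is not the authors' implementation; it assumes PyTorch and a RealNVP-style affine coupling layer, and the names AffineCoupling, nf_loss, and rehearsal_batch are hypothetical.

```python
# Minimal sketch (assumed, not the paper's code): a normalizing flow over
# embeddings, trained by maximum likelihood and inverted to generate
# pseudo-embeddings for generative rehearsal.
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """A single RealNVP-style affine coupling layer over embedding vectors."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.dim_a = dim // 2
        self.dim_b = dim - self.dim_a
        self.net = nn.Sequential(
            nn.Linear(self.dim_a, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.dim_b),
        )

    def forward(self, x):
        # Embedding -> latent, returning the log-determinant of the Jacobian.
        xa, xb = x[:, :self.dim_a], x[:, self.dim_a:]
        s, t = self.net(xa).chunk(2, dim=1)
        s = torch.tanh(s)                       # bounded scales for stability
        zb = xb * torch.exp(s) + t
        return torch.cat([xa, zb], dim=1), s.sum(dim=1)

    def inverse(self, z):
        # Latent -> embedding; this is what generates pseudo-embeddings.
        za, zb = z[:, :self.dim_a], z[:, self.dim_a:]
        s, t = self.net(za).chunk(2, dim=1)
        s = torch.tanh(s)
        xb = (zb - t) * torch.exp(-s)
        return torch.cat([za, xb], dim=1)

def nf_loss(flow, emb):
    # Negative log-likelihood under a standard normal base distribution.
    z, log_det = flow(emb)
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return -(log_pz + log_det).mean()

def rehearsal_batch(flow, head, batch_size, dim):
    # Sample latents, invert the flow to obtain pseudo-embeddings of past
    # tasks, then score them with the classification head for replay.
    with torch.no_grad():
        pseudo_emb = flow.inverse(torch.randn(batch_size, dim))
    return head(pseudo_emb)

if __name__ == "__main__":
    dim = 32
    flow, head = AffineCoupling(dim), nn.Linear(dim, 10)
    emb = torch.randn(128, dim)          # stand-in for backbone embeddings
    nf_loss(flow, emb).backward()        # one maximum-likelihood step on the NF
    print(rehearsal_batch(flow, head, batch_size=16, dim=dim).shape)
```

In a full continual-learning loop, the pseudo-embeddings produced this way would be mixed with current-task embeddings when training the classification head; because a single flow is reused across tasks, only its weights need to be stored, which is why the abstract claims a constant memory overhead.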
Pages: 606-616
Page count: 11
Related papers (50 in total)
  • [1] Generative Models from the perspective of Continual Learning
    Lesort, Timothee
    Caselles-Dupre, Hugo
    Garcia-Ortiz, Michael
    Stoian, Andrei
    Filliat, David
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
  • [2] FoCL: Feature-oriented continual learning for generative models
    Lao, Qicheng
    Mortazavi, Mehrzad
    Tahaei, Marzieh
    Dutil, Francis
    Fevens, Thomas
    Havaei, Mohammad
    PATTERN RECOGNITION, 2021, 120
  • [3] Generative Continual Concept Learning
    Rostami, Mohammad
    Kolouri, Soheil
    McClelland, James
    Pilly, Praveen
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 5545 - 5552
  • [4] Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models
    Umer, Muhammad
    Polikar, Robi
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [5] Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
    Heng, Alvin
    Soh, Harold
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [6] Continual Learning with Deep Generative Replay
    Shin, Hanul
    Lee, Jung Kwon
    Kim, Jaehong
    Kim, Jiwon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [7] Generative negative replay for continual learning
    Graffieti, Gabriele
    Maltoni, Davide
    Pellegrini, Lorenzo
    Lomonaco, Vincenzo
    NEURAL NETWORKS, 2023, 162 : 369 - 383
  • [8] Unsupervised Generative Variational Continual Learning
    Liu, Guimeng
    Yang, Guo
    Yin, Cheryl Wong Sze
    Suganathan, Ponnuthurai Nagartnam
    Savitha, Ramasamy
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2022: 4028 - 4032
  • [9] OvA-INN: Continual Learning with Invertible Neural Networks
    Hocquet, Guillaume
    Bichler, Olivier
    Querlioz, Damien
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [10] Deep Generative Replay With Denoising Diffusion Probabilistic Models for Continual Learning in Audio Classification
    Lee, Hyeon-Ju
    Buu, Seok-Jun
    IEEE ACCESS, 2024, 12 : 134714 - 134727