Generative Models from the perspective of Continual Learning

Cited by: 16
Authors
Lesort, Timothee [1 ,2 ,3 ]
Caselles-Dupre, Hugo [1 ,2 ,4 ]
Garcia-Ortiz, Michael [4 ]
Stoian, Andrei [3 ]
Filliat, David [1 ,2 ]
Affiliations
[1] ENSTA ParisTech, Flowers Team, Palaiseau, France
[2] INRIA, Paris, France
[3] Thales, Theresis Laboratory, La Défense, France
[4] SoftBank Robotics Europe, Paris, France
Source
2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
DOI
10.1109/ijcnn.2019.8851986
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Which generative model is the most suitable for Continual Learning? This paper evaluates and compares generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning. We use two quantitative metrics to estimate generation quality and memory ability. We experiment with sequential tasks on three commonly used Continual Learning benchmarks (MNIST, Fashion MNIST and CIFAR10). We find that, among all models, the original GAN performs best, and that, among Continual Learning strategies, generative replay outperforms all other methods. Although we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly unstable and remains a challenge. Our code is available online(1).
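To make the abstract's best-performing recipe concrete, below is a minimal, hypothetical PyTorch sketch of generative replay with a vanilla GAN on disjoint tasks. It is not the authors' released implementation; the tiny Generator/Discriminator architectures and the train_with_generative_replay helper are illustrative assumptions. The key idea is that, before training on each new task, samples drawn from a frozen copy of the previous generator are mixed into the new task's batches as extra "real" data, so earlier distributions are rehearsed without storing any past images.

```python
# Hypothetical sketch of generative replay with a vanilla GAN
# (illustrative only; not the paper's released code).
import copy
import torch
from torch import nn

class Generator(nn.Module):
    """Tiny MLP generator for flattened 28x28 images."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Tiny MLP discriminator returning a real/fake logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, x):
        return self.net(x)

def gan_step(G, D, opt_G, opt_D, real, bce):
    """One vanilla-GAN update on a batch of 'real' images."""
    b = real.size(0)
    fake = G(torch.randn(b, G.z_dim))
    # Discriminator update: real -> 1, fake -> 0.
    opt_D.zero_grad()
    loss_D = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    loss_D.backward()
    opt_D.step()
    # Generator update: fool the discriminator.
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(b, 1))
    loss_G.backward()
    opt_G.step()

def train_with_generative_replay(tasks, steps_per_task=200, batch=64):
    """tasks: list of tensors [N, 784] in [-1, 1], one per disjoint task."""
    G, D = Generator(), Discriminator()
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    prev_G = None  # frozen generator snapshot from earlier tasks
    for data in tasks:
        for _ in range(steps_per_task):
            idx = torch.randint(0, data.size(0), (batch,))
            real = data[idx]
            if prev_G is not None:
                # Replay: samples from the old generator act as extra
                # "real" data, rehearsing earlier tasks.
                with torch.no_grad():
                    replay = prev_G(torch.randn(batch, G.z_dim))
                real = torch.cat([real, replay], dim=0)
            gan_step(G, D, opt_G, opt_D, real, bce)
        prev_G = copy.deepcopy(G).eval()  # snapshot after each task
    return G

if __name__ == "__main__":
    # Two toy "tasks" of random data stand in for disjoint MNIST splits.
    toy_tasks = [torch.rand(512, 784) * 2 - 1 for _ in range(2)]
    train_with_generative_replay(toy_tasks, steps_per_task=10)
```

Note the design trade-off the paper's finding implies: replay quality bounds memory quality, so once generation degrades (as the abstract reports for CIFAR10), errors compound across tasks.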
Pages: 8