Adaptive Progressive Continual Learning

Cited by: 18
Authors
Xu, Ju [1 ]
Ma, Jin [1 ]
Gao, Xuesong [2 ,3 ,4 ]
Zhu, Zhanxing [5 ]
Affiliations
[1] Peking Univ, Ctr Data Sci, Beijing 100871, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
[3] Hisense Co Ltd, State Key Lab Digital Multimedia Technol, Qingdao 266071, Shandong, Peoples R China
[4] Shandong Univ, Sch Informat Sci & Engn, Qingdao 266510, Shandong, Peoples R China
[5] Beijing Inst Big Data Res, Beijing 100124, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Optimization; Bayes methods; Training; Reinforcement learning; Knowledge engineering; Complexity theory; Machine learning; adaptive progressive network framework; continual learning; Bayesian optimization; reinforcement learning; neural networks;
DOI
10.1109/TPAMI.2021.3095064
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The continual learning paradigm learns from a continuous stream of tasks in an incremental manner and aims to overcome the notorious problem of catastrophic forgetting. In this work, we propose a new adaptive progressive network framework comprising two models for continual learning: Reinforced Continual Learning (RCL) and Bayesian Optimized Continual Learning with Attention mechanism (BOCL). The core idea of the framework is to dynamically and adaptively expand the neural network structure upon the arrival of new tasks; RCL and BOCL achieve this via reinforcement learning and Bayesian optimization, respectively. An outstanding advantage of the proposed framework is that, by adaptively controlling the architecture, it does not forget knowledge that has already been learned. We propose effective ways for the two methods to exploit the learned knowledge when controlling the size of the network: RCL employs previous knowledge directly, while BOCL selectively utilizes previous knowledge (e.g., feature maps of previous tasks) via an attention mechanism. Experiments on variants of MNIST, CIFAR-100, and Sequence of 5-Datasets demonstrate that our methods outperform the state of the art in preventing catastrophic forgetting and fit new tasks better under the same or fewer computing resources.
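To make the core idea of adaptive expansion concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation): a shared hidden layer grows by a task-specific number of units whenever a new task arrives, previously learned units are frozen to prevent forgetting, and each task gets its own output head. The names ProgressiveMLP, expand, and num_new are illustrative assumptions; in RCL and BOCL the number of new units would be chosen by a reinforcement-learning controller or by Bayesian optimization rather than hard-coded, and BOCL's attention over previous feature maps is omitted here.

# Illustrative sketch only (assumed PyTorch API, not the authors' code): grow a
# shared hidden layer column-by-column as tasks arrive, freezing old columns so
# earlier tasks are not forgotten.
import torch
import torch.nn as nn


class ProgressiveMLP(nn.Module):
    """A hidden layer that grows per task, with one output head per task."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.in_dim = in_dim
        self.num_classes = num_classes
        self.columns = nn.ModuleList()  # hidden sub-layers, one per task
        self.heads = nn.ModuleList()    # task-specific classifiers

    def expand(self, num_new: int) -> None:
        """Add `num_new` hidden units for the incoming task; freeze old units."""
        for col in self.columns:
            for p in col.parameters():
                p.requires_grad_(False)          # protect previous knowledge
        self.columns.append(nn.Linear(self.in_dim, num_new))
        total_hidden = sum(c.out_features for c in self.columns)
        self.heads.append(nn.Linear(total_hidden, self.num_classes))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # A task only sees the columns that existed when it was learned.
        h = torch.cat([torch.relu(c(x)) for c in self.columns[: task_id + 1]], dim=1)
        return self.heads[task_id](h)


# Usage: pretend a controller proposed 32 units for task 0 and 16 for task 1;
# RCL/BOCL would instead search for these sizes with RL or Bayesian optimization.
model = ProgressiveMLP(in_dim=784, num_classes=10)
for task_id, num_new in enumerate([32, 16]):
    model.expand(num_new)
    logits = model(torch.randn(8, 784), task_id)  # dummy batch for this task
    print(task_id, logits.shape)                  # torch.Size([8, 10])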
Pages: 6715-6728
Page count: 14
Related Papers
50 records in total
[31]   Learning to Navigate for Mobile Robot with Continual Reinforcement Learning [J].
Wang, Ning ;
Zhang, Dingyuan ;
Wang, Yong .
PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, :3701-3706
[32]   Continual learning with invertible generative models [J].
Pomponi, Jary ;
Scardapane, Simone ;
Uncini, Aurelio .
NEURAL NETWORKS, 2023, 164 :606-616
[33]   Continual Reinforcement Learning for Intelligent Agricultural Management under Climate Changes [J].
Wang, Zhaoan ;
Jha, Kishlay ;
Xiao, Shaoping .
CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 81 (01) :1319-1336
[34]   Adaptive online continual multi-view learning [J].
Yu, Yang ;
Du, Zhekai ;
Meng, Lichao ;
Li, Jingjing ;
Hu, Jiang .
INFORMATION FUSION, 2024, 103
[35]   CLeaR: An adaptive continual learning framework for regression tasks [J].
He, Yujiang ;
Sick, Bernhard .
AI PERSPECTIVES, 3 (1)
[36]   GopGAN: Gradients Orthogonal Projection Generative Adversarial Network With Continual Learning [J].
Li, Xiaobin ;
Wang, Weiqiang .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (01) :215-227
[37]   A continual learning framework for adaptive defect classification and inspection [J].
Sun, Wenbo ;
Al Kontar, Raed ;
Jin, Judy ;
Chang, Tzyy-Shuh .
JOURNAL OF QUALITY TECHNOLOGY, 2023, 55 (05) :598-614
[38]   Adaptive instance similarity embedding for online continual learning [J].
Han, Ya-nan ;
Liu, Jian-wei .
PATTERN RECOGNITION, 2024, 149
[39]   Mitigating Catastrophic Forgetting in Robot Continual Learning: A Guided Policy Search Approach Enhanced With Memory-Aware Synapses [J].
Dong, Qingwei ;
Zeng, Peng ;
He, Yunpeng ;
Wan, Guangxi ;
Dong, Xiaoting .
IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (12) :11242-11249
[40]   Informative Performance Measures for Continual Reinforcement Learning [J].
Denker, Yannick ;
Bagus, Benedikt ;
Krawczyk, Alexander ;
Gepperth, Alexander .
2024 IEEE 20TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING, ICCP 2024, 2024, :387-392