Statistical Mechanical Analysis of Catastrophic Forgetting in Continual Learning with Teacher and Student Networks

Cited by: 11
Authors
Asanuma, Haruka [1 ]
Takagi, Shiro [1 ]
Nagano, Yoshihiro [1 ]
Yoshida, Yuki [1 ]
Igarashi, Yasuhiko [2 ,3 ]
Okada, Masato [1 ]
Affiliations
[1] Univ Tokyo, Dept Complex Sci & Engn, Kashiwa, Chiba 2778561, Japan
[2] Univ Tsukuba, Fac Engn Informat & Syst, Tsukuba, Ibaraki 3058573, Japan
[3] Japan Sci & Technol Agcy, PRESTO, Kawaguchi, Saitama 3320012, Japan
DOI: 10.7566/JPSJ.90.104001
Chinese Library Classification: O4 [Physics]
Subject Classification Code: 0702
Abstract
When a computational system continually learns from an ever-changing environment, it rapidly forgets its past experiences; this phenomenon is called catastrophic forgetting. Although many methods have been proposed to avoid catastrophic forgetting, most of them rest on intuitive insights into the phenomenon and have been evaluated only through numerical experiments on benchmark datasets. In this study, we therefore provide a theoretical framework for analyzing catastrophic forgetting based on teacher-student learning, a framework with two neural networks: one serves as the target function in supervised learning, and the other is the learning network. To analyze continual learning in this framework, we quantify task similarity through the similarity of the input distributions and of the input-output relationships of the target functions. Within this framework, we also obtain a qualitative understanding of how a single-layer linear learning network forgets tasks. The analysis shows that the network can avoid catastrophic forgetting when the similarity between input distributions is small and the similarity between the target functions' input-output relationships is large. The analysis also reveals a characteristic phenomenon we call overshoot: even after the learning network has undergone catastrophic forgetting, it may come to perform reasonably well on the earlier task again after further training on the current task.
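To make the setting concrete, the following is a minimal numerical sketch, not the paper's statistical-mechanical analysis: a single-layer linear student is trained with online SGD on a teacher task A and then on a correlated teacher task B, while its generalization error on task A is monitored. The knobs `rho_target` (overlap of the two teacher vectors) and `rho_in` (fraction of input coordinates whose statistics the two tasks share), and all function names below, are illustrative assumptions standing in for the task-similarity measures described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, lr = 500, 2000, 0.5           # input dimension, samples per task, SGD step size
rho_in, rho_target = 0.3, 0.8       # assumed similarity knobs (not the paper's definitions)

# Teacher A and a correlated teacher B with overlap approximately rho_target.
B_A = rng.standard_normal(N) / np.sqrt(N)
B_B = rho_target * B_A + np.sqrt(1.0 - rho_target**2) * rng.standard_normal(N) / np.sqrt(N)

# Crude stand-in for input-distribution similarity: task B rescales the variance
# of the coordinates that are not "shared" with task A.
shared = rng.random(N) < rho_in
scale_B = np.where(shared, 1.0, 2.0)

def sample_inputs(task, n):
    x = rng.standard_normal((n, N))
    return x if task == "A" else x * scale_B

def gen_error(J, teacher, task, n_test=2000):
    # Mean squared error of the student J against the given teacher on fresh inputs.
    x = sample_inputs(task, n_test)
    return np.mean((x @ J - x @ teacher) ** 2) / 2.0

def train(J, teacher, task, monitor=None, every=50):
    """One online pass over `task`; optionally track error on the `monitor` task."""
    history = []
    for t, xt in enumerate(sample_inputs(task, P)):
        J = J + (lr / N) * ((xt @ teacher) - (xt @ J)) * xt   # SGD on squared error
        if monitor is not None and t % every == 0:
            history.append(gen_error(J, *monitor))
    return J, history

J = np.zeros(N)                                   # student weights
J, _ = train(J, B_A, "A")                         # learn task A first
err_A = gen_error(J, B_A, "A")
J, forgetting_curve = train(J, B_B, "B", monitor=(B_A, "A"))  # then learn task B

print(f"task-A error right after task A: {err_A:.4f}")
print("task-A error while learning task B:",
      [round(e, 4) for e in forgetting_curve[::5]])
```

Under this toy setup, the task-A error typically rises as task B is learned (forgetting), and its trajectory depends on the two similarity knobs; whether it partially recovers, as in the overshoot phenomenon described above, depends on the chosen parameters.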
Pages: 9