Continual learning with attentive recurrent neural networks for temporal data classification

Cited by: 11
Authors
Yin, Shao-Yu [1 ]
Huang, Yu [1 ]
Chang, Tien-Yu [1 ]
Chang, Shih-Fang [1 ,3 ]
Tseng, Vincent S. [1 ,2 ]
Affiliations
[1] Natl Yang Ming Chiao Tung Univ, Dept Comp Sci, Hsinchu, Taiwan
[2] Natl Chung Hsing Univ, Dept Management Informat Syst, Taichung, Taiwan
[3] Ind Technol Res Inst, Informat & Commun Res Labs, Hsinchu, Taiwan
Keywords
Continual learning; Temporal data classification; Recurrent neural networks; Deep learning; Human activity recognition; Lifelong
DOI
10.1016/j.neunet.2022.10.031
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Continual learning is an emerging branch of deep learning that aims to train a model on a series of tasks sequentially without forgetting the knowledge acquired from previous tasks. Despite considerable attention from the research community, continual learning techniques for temporal data remain underexplored. In this paper, we address temporal-based continual learning by allowing a model to learn continuously from temporal data. To solve the catastrophic forgetting problem that arises when learning temporal data in task-incremental scenarios, we propose a novel method based on attentive recurrent neural networks, called Temporal Teacher Distillation (TTD). TTD mitigates catastrophic forgetting in an attentive recurrent neural network based on three hypotheses: the Rotation Hypothesis, the Redundant Hypothesis, and the Recover Hypothesis. The Rotation and Redundant Hypotheses can cause the attention-shift phenomenon, which degrades model performance on previously learned tasks, while ignoring the Recover Hypothesis incurs extra memory usage when training on successive tasks. The proposed TTD, built on these hypotheses, therefore addresses the shortcomings of existing methods for temporal-based continual learning. To evaluate our method in the task-incremental setting, we use a public dataset, WIreless Sensor Data Mining (WISDM), and a synthetic dataset, Split-QuickDraw-100. Experimental results show that TTD outperforms state-of-the-art methods by up to 14.6% in accuracy and 45.1% in the forgetting measure. To the best of our knowledge, this is the first work to study continual learning with attentive recurrent neural networks for temporal data classification under real-world incremental categories, and it provides a suitable application-oriented scenario. © 2022 Elsevier Ltd. All rights reserved.
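The abstract describes distilling knowledge from a "teacher" snapshot of the model into the current model to mitigate forgetting in an attentive recurrent network. As a rough illustration of that general idea only (not the authors' TTD algorithm, which is not specified here), the following is a minimal, hypothetical PyTorch sketch; the names AttentiveGRU and distill_step and the hyperparameters alpha and T are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveGRU(nn.Module):
    """GRU classifier with additive attention pooling over time steps.
    Hypothetical stand-in for an attentive recurrent network."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)   # one score per time step
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                        # x: (batch, time, in_dim)
        h, _ = self.rnn(x)                       # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)                 # attention-pooled context
        return self.head(ctx), w

def distill_step(student, teacher, x, y, alpha=0.5, T=2.0):
    """One training step: cross-entropy on the current task plus a
    distillation penalty that keeps the student close to a frozen
    teacher (a snapshot of the model after the previous task). In a
    task-incremental setting, the distillation term is typically
    restricted to the logits of previously learned classes."""
    logits, _ = student(x)
    ce = F.cross_entropy(logits, y)
    with torch.no_grad():                        # teacher is frozen
        t_logits, _ = teacher(x)
    kd = F.kl_div(F.log_softmax(logits / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return ce + alpha * kd
```

The temperature T softens both distributions so the teacher's relative class preferences, not just its top prediction, constrain the student while it adapts to the new task.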
Pages: 171-187
Page count: 17