Single-Head Lifelong Learning Based on Distilling Knowledge

Cited by: 4
Authors
Wang, Yen-Hsiang [1 ]
Lin, Chih-Yang [2 ]
Thaipisutikul, Tipajin [3 ]
Shih, Timothy K. [1 ]
Affiliations
[1] Natl Cent Univ, Dept Comp Sci & Informat Engn, Taoyuan 320, Taiwan
[2] Yuan Ze Univ, Dept Elect Engn, Taoyuan 32003, Taiwan
[3] Mahidol Univ, Fac Informat & Commun Technol, Salaya 73170, Thailand
Source
IEEE ACCESS | 2022, Vol. 10
Keywords
Task analysis; Neural networks; Training; Knowledge engineering; Data models; Testing; Predictive models; Lifelong learning; continuous learning; incremental learning; knowledge distillation; IMBALANCED DATA;
DOI
10.1109/ACCESS.2022.3155451
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Within the machine learning field, the main purpose of lifelong learning, also known as continuous learning, is to enable neural networks to learn continuously, as humans do. Lifelong learning accumulates the knowledge learned from previous tasks and transfers it to support the neural network in future tasks. This technique not only avoids catastrophic forgetting of previous tasks when training on new tasks, but also makes the model more robust as data evolves over time. Motivated by recent advances in lifelong learning techniques, this paper presents a novel feature-based knowledge distillation method that differs from existing knowledge distillation methods in lifelong learning. Specifically, our proposed method utilizes the features from intermediate layers and compresses them in a unique way that involves global average pooling and fully connected layers. We then use the output of this branch network to deliver information from previous tasks to the model in future tasks. Extensive experiments show that our proposed model consistently outperforms the state-of-the-art baselines in accuracy by at least two percent under different experimental settings.
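The abstract describes compressing intermediate feature maps through global average pooling and a fully connected layer, then matching the branch outputs of the old and new models as a distillation signal. The sketch below illustrates that general idea in plain NumPy; the function names, layer sizes, and MSE loss are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def gap(feat):
    # Global average pooling: collapse a (C, H, W) feature map to a (C,) vector
    return feat.mean(axis=(1, 2))

def branch(feat, W, b):
    # Branch network: GAP followed by one fully connected layer
    return W @ gap(feat) + b

def distill_loss(student_feat, teacher_feat, W, b):
    # MSE between branch outputs: the new (student) model is penalized
    # for drifting away from the old (teacher) model's compressed features
    s = branch(student_feat, W, b)
    t = branch(teacher_feat, W, b)
    return float(np.mean((s - t) ** 2))

rng = np.random.default_rng(0)
C, H, Wd, D = 8, 4, 4, 3          # channels, height, width, branch output dim
W = rng.standard_normal((D, C))    # shared FC weights (illustrative)
b = np.zeros(D)
teacher = rng.standard_normal((C, H, Wd))
student = teacher.copy()
assert distill_loss(student, teacher, W, b) == 0.0  # identical features give zero loss
```

In training, this loss would be added to the classification loss on the new task, so gradients pull the student's intermediate features toward the frozen teacher's compressed representation.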
Pages: 35469 - 35478
Page count: 10
Related papers
50 total
  • [21] Task-Based Neuromodulation Architecture for Lifelong Learning
    Daram, Anurag Reddy
    Kudithipudi, Dhireesha
    Yanguas-Gil, Angel
    2018 FOURTH INTERNATIONAL CONFERENCE ON COMPUTING COMMUNICATION CONTROL AND AUTOMATION (ICCUBEA), 2018,
  • [22] Education Mining in the Relationship between General Knowledge and Deep Knowledge for Lifelong Learning
    Nuankaew, Pratya
    Nuankaew, Wongpanya
    Bussaman, Sittichai
    Jedeejit, Ploykwan
    2017 14TH INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING/ELECTRONICS, COMPUTER, TELECOMMUNICATIONS AND INFORMATION TECHNOLOGY (ECTI-CON), 2017, : 694 - 697
  • [23] Lifelong Learning for Text Steganalysis Based on Chronological Task Sequence
    Wen, Juan
    Deng, Yaqian
    Wu, Jiaxuan
    Liu, Xingpeng
    Xue, Yiming
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 2412 - 2416
  • [24] Lifelong Learning in Sensor-Based Human Activity Recognition
    Ye, Juan
    Dobson, Simon
    Zambonelli, Franco
    IEEE PERVASIVE COMPUTING, 2019, 18 (03) : 49 - 58
  • [25] Continuous image anomaly detection based on contrastive lifelong learning
    Fan, Wentao
    Shangguan, Weimin
    Bouguila, Nizar
    APPLIED INTELLIGENCE, 2023, 53 (14) : 17693 - 17707
  • [27] A Lifelong Learning Method Based on Generative Feature Replay for Bearing Diagnosis With Incremental Fault Types
    Liu, Yao
    Chen, Bojian
    Wang, Dong
    Kong, Lin
    Shi, Juanjuan
    Shen, Changqing
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [28] Overcoming the Knowledge Bottleneck Using Lifelong Learning by Social Agents
    Nirenburg, Sergei
    McShane, Marjorie
    English, Jesse
    NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2021), 2021, 12801 : 24 - 29
  • [29] Dynamic Model of Knowledge and Lifelong Learning Strategy in Economic Growth
    Corbu, Luminita-Claudia
    Hapenciuc, Cristian-Valentin
    14TH ECONOMIC INTERNATIONAL CONFERENCE: STRATEGIES AND DEVELOPMENT POLICIES OF TERRITORIES: INTERNATIONAL, COUNTRY, REGION, CITY, LOCATION CHALLENGES, 2018, : 22 - 27
  • [30] Knowledge Distillation-Based Domain-Invariant Representation Learning for Domain Generalization
    Niu, Ziwei
    Yuan, Junkun
    Ma, Xu
    Xu, Yingying
    Liu, Jing
    Chen, Yen-Wei
    Tong, Ruofeng
    Lin, Lanfen
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 245 - 255