Efficient Perturbation Inference and Expandable Network for continual learning

Cited by: 9
Authors
Du, Fei [1 ]
Yang, Yun [2 ]
Zhao, Ziyuan [3 ]
Zeng, Zeng [3 ,4 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650091, Peoples R China
[2] Yunnan Univ, Natl Pilot Sch Software, Kunming 650091, Peoples R China
[3] ASTAR, Inst Infocomm Res I2R, Singapore 138632, Singapore
[4] Shanghai Univ, Sch Microelect, Shanghai, Peoples R China
Keywords
Continual learning; Dynamic networks; Class incremental learning; Uncertainty inference;
DOI
10.1016/j.neunet.2022.10.030
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Although humans are capable of learning new tasks without forgetting previous ones, most neural networks fail to do so because learning new tasks can override the knowledge acquired from previous data. In this work, we alleviate this issue by proposing a novel Efficient Perturbation Inference and Expandable Network (EPIE-Net), which dynamically expands lightweight task-specific decoders for new classes and employs a mixed-label uncertainty strategy to improve robustness. Moreover, we average the predicted class probabilities over perturbed samples at inference, which generally improves the model's performance. Experimental results show that our method consistently outperforms other methods with fewer parameters on class incremental learning benchmarks. For example, on the CIFAR100 10-step setup, our method achieves an average accuracy of 76.33% and a last-step accuracy of 65.93% with only 3.46M parameters on average. © 2022 Published by Elsevier Ltd.
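The abstract's inference scheme, averaging class probabilities over perturbed copies of an input, can be sketched as follows. This is a minimal illustration only: the Gaussian noise perturbation, its scale, the toy linear model, and the function names are assumptions for demonstration, not the paper's exact method.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def perturbation_inference(model, x, n_perturb=8, noise_std=0.05, seed=0):
    """Average class probabilities over noise-perturbed copies of x.

    `model` maps an input array to class logits. The additive Gaussian
    perturbation used here is an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    probs = np.zeros_like(softmax(model(x)))
    for _ in range(n_perturb):
        x_pert = x + rng.normal(0.0, noise_std, size=x.shape)
        probs += softmax(model(x_pert))
    return probs / n_perturb

# Toy linear "model" with 3 classes and 2 input features (hypothetical).
W = np.array([[1.0, -1.0], [-1.0, 1.0], [0.5, 0.5]])
model = lambda x: W @ x

avg_probs = perturbation_inference(model, np.array([0.2, -0.4]))
predicted = int(np.argmax(avg_probs))
```

Because each softmax sums to one, the averaged vector is still a valid probability distribution; averaging over perturbations smooths out predictions that are sensitive to small input changes.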
Pages: 97-106 (10 pages)
Related Papers
50 items in total
  • [42] XST: A Crossbar Column-wise Sparse Training for Efficient Continual Learning
    Zhang, Fan
    Yang, Li
    Meng, Jian
    Seo, Jae-sun
    Cao, Yu
    Fan, Deliang
    PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022, : 48 - 51
  • [43] Continual learning in medical image analysis: A survey
    Wu, Xinyao
    Xu, Zhe
    Tong, Raymond Kai-yu
COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 182
  • [44] Gating Mechanism in Deep Neural Networks for Resource-Efficient Continual Learning
    Jin, Hyundong
    Yun, Kimin
    Kim, Eunwoo
    IEEE ACCESS, 2022, 10 : 18776 - 18786
  • [45] Open-world continual learning: Unifying novelty detection and continual learning
    Kim, Gyuhak
    Xiao, Changnan
    Konishi, Tatsuya
    Ke, Zixuan
    Liu, Bing
    ARTIFICIAL INTELLIGENCE, 2025, 338
  • [46] Communication-efficient federated continual learning for distributed learning system with Non-IID data
    Zhang, Zhao
    Zhang, Yong
    Guo, Da
    Zhao, Shuang
    Zhu, Xiaolin
    SCIENCE CHINA-INFORMATION SCIENCES, 2023, 66 (02)
  • [47] Continual learning with high-order experience replay for dynamic network embedding
    Wang, Zhizheng
    Sun, Yuanyuan
    Zhang, Xiaokun
    Xu, Bo
    Yang, Zhihao
    Lin, Hongfei
    PATTERN RECOGNITION, 2025, 159
  • [49] Incremental sequential three-way decision based on continual learning network
    Li, Hongyuan
    Yu, Hong
    Min, Fan
    Liu, Dun
    Li, Huaxiong
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (06) : 1633 - 1645