Class similarity weighted knowledge distillation for few shot incremental learning

Cited: 2
Authors
Akmel, Feidu [1 ]
Meng, Fanman [1 ]
Wu, Qingbo [1 ]
Chen, Shuai [1 ]
Zhang, Runtong [1 ]
Assefa, Maregu [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu, Peoples R China
Keywords
Knowledge distillation; Semantic information; Few shot; Incremental learning;
DOI
10.1016/j.neucom.2024.127587
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Few-shot class-incremental learning poses the challenge of learning new concepts when the learner can access only a few samples per concept. Standard incremental learning techniques cannot be applied directly because of the small number of training samples. Moreover, catastrophic forgetting is the tendency of an artificial neural network to abruptly and completely forget previously learned knowledge upon learning new knowledge; it arises from the lack of supervision for older classes or from an imbalance between old and new classes. In this work, we propose a new distillation structure to tackle both forgetting and overfitting. Specifically, we suggest a dual distillation module that adaptively draws knowledge from two different but complementary teachers. The first teacher is the base model, trained on the large set of base classes; the second teacher is the updated model from the previous session (session K-1), which holds the refined knowledge of the new classes observed so far. The first teacher thus reduces overfitting by transferring knowledge obtained from the base classes to the new classes, while the second teacher reduces forgetting by distilling knowledge from the previous model. Additionally, we use semantic information in the form of word embeddings to facilitate the distillation process, and we align the visual and semantic vectors with an attention mechanism over the visual embeddings. Extensive experiments on Mini-ImageNet, CIFAR100, and CUB200 show that our model achieves state-of-the-art performance compared with existing few-shot incremental learning methods.
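To make the dual-teacher idea concrete, the following is a minimal PyTorch sketch of such a combined loss. It is an illustration under standard softened-logit distillation (KL divergence between temperature-scaled softmaxes), not the paper's actual implementation: the function names, the temperature `T`, and the fixed weights `alpha` and `beta` are assumptions, and the class-similarity weighting named in the title is omitted here.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Standard softened-logit KL distillation (Hinton-style)."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def dual_teacher_loss(student_logits, base_logits, prev_logits,
                      labels, alpha=0.5, beta=0.5):
    """Combine cross-entropy on the few new samples with distillation
    from two complementary teachers:
      - base_logits: frozen base-session model (mitigates overfitting)
      - prev_logits: model from session K-1  (mitigates forgetting)
    alpha/beta are hypothetical fixed weights; the paper instead weights
    the distillation terms by class similarity.
    """
    ce = F.cross_entropy(student_logits, labels)
    # Each teacher only scores the classes it was trained on, so we
    # distill on the matching slice of the student's (larger) logit vector,
    # assuming earlier classes occupy the leading positions.
    n_base = base_logits.size(1)
    n_prev = prev_logits.size(1)
    loss_base = distill_loss(student_logits[:, :n_base], base_logits)
    loss_prev = distill_loss(student_logits[:, :n_prev], prev_logits)
    return ce + alpha * loss_base + beta * loss_prev
```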
Pages: 11
Related Papers
50 records in total
  • [1] Akmel, Feidu; Meng, Fanman; Liu, Mingyu; Zhang, Runtong; Teka, Asebe; Lemuye, Elias. Few-shot class incremental learning via prompt transfer and knowledge distillation. IMAGE AND VISION COMPUTING, 2024, 151.
  • [2] Cui, Yawen; Deng, Wanxia; Xu, Xin; Liu, Zhen; Liu, Zhong; Pietikainen, Matti; Liu, Li. Uncertainty-Guided Semi-Supervised Few-Shot Class-Incremental Learning With Knowledge Distillation. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 6422-6435.
  • [3] Chen, Xiaodong; Jiang, Weijie; Huang, Zhiyong; Su, Jiangwen; Yu, Yuanlong. Knowledge Representation by Generic Models for Few-Shot Class-Incremental Learning. ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022, 2023, 153: 1237-1247.
  • [4] Xu, Xinlei; Niu, Saisai; Wang, Zhe; Guo, Wei; Jing, Lihong; Yang, Hai. Multi-feature space similarity supplement for few-shot class incremental learning. KNOWLEDGE-BASED SYSTEMS, 2023, 265.
  • [5] Zhou, Chunpeng; Wang, Haishuai; Zhou, Sheng; Yu, Zhi; Bandara, Danushka; Bu, Jiajun. Hierarchical Knowledge Propagation and Distillation for Few-Shot Learning. NEURAL NETWORKS, 2023, 167: 615-625.
  • [6] Tan, Zhen; Ding, Kaize; Guo, Ruocheng; Liu, Huan. Graph Few-shot Class-incremental Learning. WSDM'22: PROCEEDINGS OF THE FIFTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2022: 987-996.
  • [7] Wang, Tingting; Lv, Qiujian; Hu, Bo; Sun, Degang. A Few-Shot Class-Incremental Learning Approach for Intrusion Detection. 30TH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS (ICCCN 2021), 2021.
  • [8] Zhu, Liming; Zhang, Bo; Zhao, Dan; Hu, Hongtao. Few-Shot Class Incremental Learning for Packaging Defects with Double Class Feature Interaction. PROCEEDINGS OF THE 36TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC 2024, 2024: 5113-5118.
  • [9] Cui, Yawen; Deng, Wanxia; Chen, Haoyu; Liu, Li. Uncertainty-Aware Distillation for Semi-Supervised Few-Shot Class-Incremental Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (10): 14259-14272.
  • [10] Onchis, Darian M.; Samuila, Ioan-Valentin. Double distillation for class incremental learning. 2021 23RD INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING (SYNASC 2021), 2021: 182-185.