SS-IL: Separated Softmax for Incremental Learning

Cited by: 55
Authors
Ahn, Hongjoon [1 ]
Kwak, Jihwan [4 ]
Lim, Subin [3 ]
Bang, Hyeonsu [1 ]
Kim, Hyojun [2 ]
Moon, Taesup [4 ]
Affiliations
[1] Sungkyunkwan Univ, Dept Artificial Intelligence, Suwon, South Korea
[2] Sungkyunkwan Univ, Dept Elect & Elect Engn, Suwon, South Korea
[3] Sungkyunkwan Univ, Dept Comp Engn, Suwon, South Korea
[4] Seoul Natl Univ, Dept Elect & Comp Engn, Seoul, South Korea
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021
DOI
10.1109/ICCV48922.2021.00088
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We consider the class incremental learning (CIL) problem, in which a learning agent continuously learns new classes from incrementally arriving training-data batches and aims to predict well on all classes learned so far. The main challenge is catastrophic forgetting, and for exemplar-memory based CIL methods it is generally known that forgetting is commonly caused by a classification-score bias injected by the data imbalance between the new classes and the old classes (in the exemplar memory). While several methods have been proposed to correct such score bias with additional post-processing, e.g., score re-scaling or balanced fine-tuning, no systematic analysis of the root cause of the bias has been done. To that end, we show that computing the softmax probabilities by combining the output scores for all old and new classes can be the main cause of the bias. We then propose a new method, dubbed Separated Softmax for Incremental Learning (SS-IL), which consists of a separated softmax (SS) output layer combined with task-wise knowledge distillation (TKD) to resolve this bias. Through extensive experiments on several large-scale CIL benchmark datasets, we show that SS-IL achieves strong state-of-the-art accuracy by attaining much more balanced prediction scores across old and new classes, without any additional post-processing.
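The separated-softmax idea in the abstract can be illustrated numerically: restricting the softmax normalization to the task block that contains the ground-truth label keeps the old-class loss independent of how large the new-class logits grow, which is exactly the bias that a combined softmax over all classes suffers from under data imbalance. The NumPy sketch below (the function names and the two-task split are illustrative, not the paper's code) contrasts the two:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def combined_ce(logits, label):
    """Standard cross-entropy: softmax normalized over ALL classes at once."""
    return -np.log(softmax(logits)[label])

def separated_ce(logits, label, old_idx, new_idx):
    """Separated-softmax cross-entropy (sketch): normalize only over
    the task block (old or new classes) that contains the label."""
    block = new_idx if label in new_idx else old_idx
    p = softmax(logits[block])
    return -np.log(p[block.index(label)])

# Two old classes (0, 1) and two new classes (2, 3); the true label is old class 0.
old_idx, new_idx = [0, 1], [2, 3]
before = np.array([1.0, 0.5, 3.0, 2.0])   # moderate new-class logits
after  = np.array([1.0, 0.5, 9.0, 8.0])   # inflated new-class logits (imbalance)

# Combined softmax: the old-class loss grows as new-class logits inflate.
print(combined_ce(before, 0), combined_ce(after, 0))
# Separated softmax: the old-class loss is untouched by new-class logits.
print(separated_ce(before, 0, old_idx, new_idx),
      separated_ce(after, 0, old_idx, new_idx))
```

In the full method this separated loss is paired with task-wise knowledge distillation (TKD) on the old-task outputs; the sketch above covers only the softmax separation.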
Pages: 824-833 (10 pages)