IMPROVING FEATURE GENERALIZABILITY WITH MULTITASK LEARNING IN CLASS INCREMENTAL LEARNING

Cited by: 4
Authors
Ma, Dong [1 ,2 ]
Tang, Chi Ian [1 ]
Mascolo, Cecilia [1 ]
Affiliations
[1] Univ Cambridge, Cambridge, England
[2] Singapore Management Univ, Singapore, Singapore
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2022
Funding
European Research Council;
Keywords
Class Incremental Learning; Continual Learning; Multitask Learning; Keyword Spotting;
DOI
10.1109/ICASSP43922.2022.9746862
CLC Classification
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
Many deep learning applications, like keyword spotting [1, 2], require the incorporation of new concepts (classes) over time, referred to as Class Incremental Learning (CIL). The major challenge in CIL is catastrophic forgetting, i.e., the loss of previously acquired knowledge when new tasks are learned; the aim is therefore to preserve as much of the old knowledge as possible while learning new tasks. Various techniques, such as regularization, knowledge distillation, and the use of exemplars, have been proposed to address this issue. However, prior works primarily focus on the incremental learning step, while ignoring the optimization of the base model training. We hypothesise that a more transferable and generalizable feature representation from the base model would benefit incremental learning. In this work, we adopt multitask learning during base model training to improve feature generalizability. Specifically, instead of training a single model with all the base classes, we decompose the base classes into multiple subsets and regard each of them as a task. These tasks are trained concurrently, and a shared feature extractor is obtained for incremental learning. We evaluate our approach on two datasets under various configurations. The results show that our approach improves the average incremental learning accuracy by up to 5.5%, enabling more reliable and accurate keyword spotting over time. Moreover, the proposed approach can be combined with many existing techniques and provides additional performance gains.
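As a rough illustration of the training scheme described in the abstract, the sketch below shows one way a shared feature extractor can be trained concurrently on several subsets of the base classes, each subset with its own classification head. This is a minimal PyTorch sketch, not the authors' released code: the module names (FeatureExtractor, MultitaskBaseModel), the backbone architecture, and the task/class counts are illustrative assumptions, and only the shared extractor would be carried forward into the incremental learning phase.

```python
# Minimal sketch of multitask base-model training for class incremental
# learning: base classes are split into subsets, each subset gets its own
# classification head, and all heads share one feature extractor.
# All hyperparameters and module names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureExtractor(nn.Module):
    """Shared backbone; only this part is reused in the incremental phase."""

    def __init__(self, in_dim=64, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class MultitaskBaseModel(nn.Module):
    """One shared extractor plus one linear head per subset (task) of base classes."""

    def __init__(self, num_tasks=4, classes_per_task=5, in_dim=64, feat_dim=128):
        super().__init__()
        self.extractor = FeatureExtractor(in_dim, feat_dim)
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, classes_per_task) for _ in range(num_tasks)]
        )

    def forward(self, x, task_id):
        # Route the shared features through the head of the requested task.
        return self.heads[task_id](self.extractor(x))


def train_step(model, optimizer, task_batches):
    """task_batches: one (inputs, labels) mini-batch per task.

    Per-task losses are summed so that every task is optimized
    concurrently through the shared feature extractor.
    """
    optimizer.zero_grad()
    loss = sum(
        F.cross_entropy(model(x, t), y) for t, (x, y) in enumerate(task_batches)
    )
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage with random data: 4 tasks of 5 classes each (20 base classes).
    model = MultitaskBaseModel()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    batches = [
        (torch.randn(8, 64), torch.randint(0, 5, (8,))) for _ in range(4)
    ]
    print(train_step(model, opt, batches))
```

The design choice mirrored here is that the per-task losses are backpropagated through a single shared backbone, so the learned features must serve all base-class subsets at once rather than one joint classifier, which is what the paper argues makes them more generalizable for later incremental steps.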
Pages: 4173-4177
Page count: 5
References (27 in total)
[1] Aljundi, Rahaf; Babiloni, Francesca; Elhoseiny, Mohamed; Rohrbach, Marcus; Tuytelaars, Tinne. Memory Aware Synapses: Learning What (not) to Forget. Computer Vision - ECCV 2018, Pt III, 2018, 11207: 144-161.
[2] [Anonymous]. INTERSPEECH, 2005.
[3] Caruana, R. Multitask Learning. Machine Learning, 1997, 28(1): 41-75.
[4] Castro, Francisco M.; Marin-Jimenez, Manuel J.; Guil, Nicolas; Schmid, Cordelia; Alahari, Karteek. End-to-End Incremental Learning. Computer Vision - ECCV 2018, Pt XII, 2018, 11216: 241-257.
[5] Chen, G. Chinese Control Conference, 2014: 1087. DOI 10.1109/ChiCC.2014.6896779.
[6] Diethe, T. arXiv:1903.05202, 2019.
[7] Goodfellow, I. J. Advances in Neural Information Processing Systems, 2014, 27: 2672.
[8] Hayes, Tyler L.; Kafle, Kushal; Shrestha, Robik; Acharya, Manoj; Kanan, Christopher. REMIND Your Neural Network to Prevent Catastrophic Forgetting. Computer Vision - ECCV 2020, Pt VIII, 2020, 12353: 466-483.
[9] Hinton, G. arXiv, 2015.
[10] Hou, Saihui; Pan, Xinyu; Loy, Chen Change; Wang, Zilei; Lin, Dahua. Learning a Unified Classifier Incrementally via Rebalancing. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 831-839.