Class-incremental object detection

Cited by: 10
Authors
Dong, Na [1 ]
Zhang, Yongqiang [1 ]
Ding, Mingli [1 ]
Bai, Yancheng [2 ]
Affiliations
[1] Harbin Inst Technol, Sch Instrument Sci & Engn, Harbin 150001, Peoples R China
[2] Chinese Acad Sci, Inst Software, Beijing 100049, Peoples R China
Funding
National Science Foundation (US); China Postdoctoral Science Foundation;
Keywords
Class-incremental learning; Object detection; Information asymmetry; Non-affection distillation; Deep learning;
DOI
10.1016/j.patcog.2023.109488
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning architectures have shown remarkable results in the object detection task. However, they experience a critical performance drop when they are required to learn new classes incrementally without forgetting old ones. This catastrophic forgetting phenomenon impedes the deployment of artificial intelligence in real-world scenarios where systems need to learn new and different representations over time. Recently, many incremental learning methods have been proposed to avoid the catastrophic forgetting problem. However, current state-of-the-art class-incremental learning strategies aim at preserving the knowledge of old classes while learning new ones sequentially, which leads to the following problems: (1) In the process of preserving information of old classes, only a small portion of data from the previous tasks is kept and replayed during training, which inevitably incurs a bias that favors the new classes but harms the old classes. (2) With the knowledge of previous classes distilled into the new model, a sub-optimal solution for the new task is obtained, since preserving the previous classes interferes with the training of the new classes. To address these issues, termed Information Asymmetry (IA), we propose a double-head framework that preserves the knowledge of old classes and learns the knowledge of new classes separately. Specifically, we transfer the knowledge of the previous model to the currently learned one to overcome the catastrophic forgetting problem. Furthermore, considering that IA would affect the training of the new model, we propose a Non-Affection mask to distill the knowledge of the interested regions at the feature level. Comprehensive experimental results demonstrate that our proposed method significantly outperforms other state-of-the-art class-incremental object detection methods on the PASCAL VOC and MS COCO datasets. (c) 2023 Elsevier Ltd. All rights reserved.
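The abstract describes masked, feature-level knowledge distillation from the previous (old-class) detector to the current one. The paper's exact formulation of the Non-Affection mask and the double-head design is not given here; the following is a minimal sketch, assuming the mask is simply a binary region-of-interest map applied to an L2 feature-distillation loss. All names (masked_feature_distillation_loss, na_mask, teacher_feat, student_feat) and the loss form are illustrative assumptions, not the authors' implementation.

```python
# Sketch: feature-level distillation restricted to masked regions (assumed form).
import torch


def masked_feature_distillation_loss(
    student_feat: torch.Tensor,  # (B, C, H, W) features from the current detector
    teacher_feat: torch.Tensor,  # (B, C, H, W) features from the frozen previous detector
    na_mask: torch.Tensor,       # (B, 1, H, W) binary mask over regions to distill
    eps: float = 1e-6,
) -> torch.Tensor:
    """L2 distillation loss computed only inside the masked regions."""
    diff = (student_feat - teacher_feat) ** 2      # element-wise squared error
    masked = diff * na_mask                        # zero out regions outside the mask
    # Normalize by the number of masked feature elements so the loss scale
    # does not depend on how large the masked regions are.
    denom = na_mask.sum() * student_feat.size(1) + eps
    return masked.sum() / denom


if __name__ == "__main__":
    b, c, h, w = 2, 256, 32, 32
    student = torch.randn(b, c, h, w, requires_grad=True)
    teacher = torch.randn(b, c, h, w)               # would come from the frozen old model
    mask = (torch.rand(b, 1, h, w) > 0.5).float()   # placeholder region-of-interest mask
    loss = masked_feature_distillation_loss(student, teacher, mask)
    loss.backward()
    print(loss.item())
```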
Pages: 11