Heterogeneous Forgetting Compensation for Class-Incremental Learning

Cited by: 16
Authors
Dong, Jiahua [1,2,3]
Liang, Wenqi [1,2,3]
Cong, Yang [4]
Sun, Gan [1,2]
Affiliations
[1] Chinese Acad Sci, Shenyang Inst Automat, State Key Lab Robot, Shenyang 110016, Peoples R China
[2] Chinese Acad Sci, Inst Robot & Intelligent Mfg, Shenyang 110169, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[4] South China Univ Technol, Guangzhou 510640, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023) | 2023
DOI
10.1109/ICCV51070.2023.01078
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Class-incremental learning (CIL) has achieved remarkable success in learning new classes consecutively while overcoming catastrophic forgetting of old categories. However, most existing CIL methods unreasonably assume that all old categories are forgotten at the same pace, neglecting the negative influence that forgetting heterogeneity among different old classes has on forgetting compensation. To surmount these challenges, we develop a novel Heterogeneous Forgetting Compensation (HFC) model, which resolves the heterogeneous forgetting of easy-to-forget and hard-to-forget old categories from both the representation and gradient aspects. Specifically, we design a task-semantic aggregation block to alleviate heterogeneous forgetting from the representation aspect: it aggregates local category information within each task to learn task-shared global representations. Moreover, we develop two novel plug-and-play losses to alleviate forgetting from the gradient aspect: a gradient-balanced forgetting compensation loss, which rectifies the forgetting heterogeneity of old categories, and a gradient-balanced relation distillation loss, which enforces heterogeneous relation consistency. Experiments on several representative datasets demonstrate the effectiveness of our HFC model. The code is available at https://github.com/JiahuaDong/HFC.
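To make the gradient-balanced idea concrete, below is a minimal PyTorch sketch of how per-class compensation weights might be derived from accumulated gradient magnitudes and applied to a classification loss. All names here are hypothetical illustrations, not the authors' implementation; the exact HFC formulation is given in the paper and the official repository linked above.

```python
import torch
import torch.nn.functional as F

def gradient_balanced_weights(grad_norms: torch.Tensor) -> torch.Tensor:
    # grad_norms[c]: accumulated gradient magnitude for old class c.
    # Easy-to-forget classes tend to accumulate larger gradients, so they
    # receive proportionally larger compensation weights here.
    # (Hypothetical weighting; HFC's exact scheme differs.)
    weights = grad_norms / grad_norms.sum()
    return weights * grad_norms.numel()  # rescale so the mean weight is 1

def compensation_loss(logits: torch.Tensor,
                      targets: torch.Tensor,
                      class_weights: torch.Tensor) -> torch.Tensor:
    # Per-sample cross-entropy, reweighted by the weight assigned to each
    # sample's ground-truth (old) class.
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (class_weights[targets] * per_sample).mean()

# Toy usage: 5 old classes, batch of 8 samples.
grad_norms = torch.rand(5) + 0.1
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
loss = compensation_loss(logits, targets, gradient_balanced_weights(grad_norms))
```

The point of the sketch is only the reweighting mechanism: classes whose parameters receive larger gradients (a proxy for being easy to forget) contribute more to the loss, so compensation is no longer uniform across old categories.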
Pages: 11708-11717 (10 pages)