Background Adaptation with Residual Modeling for Exemplar-Free Class-Incremental Semantic Segmentation

Cited by: 0
Authors
Zhang, Anqi [1 ]
Gao, Guangyu [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing, Peoples R China
Source
COMPUTER VISION - ECCV 2024, PT LII | 2025 / Vol. 15110
Funding
National Natural Science Foundation of China;
Keywords
HIPPOCAMPUS;
DOI
10.1007/978-3-031-72943-0_10
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Class Incremental Semantic Segmentation (CISS), within Incremental Learning for semantic segmentation, targets segmenting new categories while reducing catastrophic forgetting of the old categories. In addition, background shifting, where the background category changes constantly at each step, is a challenge specific to CISS. Current methods with a shared background classifier struggle to keep up with these changes, leading to decreased stability in background predictions and reduced segmentation accuracy. To address this challenge, we designed a novel background adaptation mechanism, which explicitly models the background residual rather than the background itself at each step, and aggregates these residuals to represent the evolving background. The background adaptation mechanism thus keeps previous background classifiers stable, while letting the model concentrate on the easily learned residuals from the additional channel, which enhances background discernment for better prediction of novel categories. To precisely optimize the background adaptation mechanism, we propose a Pseudo Background Binary Cross-Entropy loss and Background Adaptation losses, which amplify the adaptation effect. Group Knowledge Distillation and Background Feature Distillation strategies are designed to prevent forgetting of old categories. Our approach, evaluated across various incremental scenarios on the Pascal VOC 2012 and ADE20K datasets, outperforms prior exemplar-free state-of-the-art methods by 3.0% mIoU on VOC 10-1 and 2.0% on ADE 100-5, notably enhancing the accuracy of new classes while mitigating catastrophic forgetting. Code is available at https://andyzaq.github.io/barmsite/.
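The core idea of the abstract (freeze earlier background classifiers and represent the evolving background as the old background logit plus a sum of per-step learned residuals) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `background_logit`, the scalar logits, and the per-step scalar residuals are all hypothetical simplifications of what would, in practice, be per-pixel classifier outputs.

```python
# Hypothetical sketch of background-residual aggregation in CISS.
# The base background logit from step 0 stays frozen; each incremental
# step t contributes a learned residual r_t, and the effective background
# logit is their sum. Only the newest residual is trained at each step.

def background_logit(base_logit, residuals):
    """Aggregate a frozen base background logit with per-step residuals."""
    return base_logit + sum(residuals)

# Example: frozen base from step 0, residuals learned at steps 1 and 2.
base = 1.5
residuals = [0.2, -0.4]
print(background_logit(base, residuals))  # approximately 1.3
```

Because the base logit and earlier residuals are never modified, predictions for previously learned background behavior remain stable, and each step only has to learn a (small, easily optimized) correction.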
Pages: 166-183
Page count: 18