Incremental Few-Shot Semantic Segmentation via Embedding Adaptive-Update and Hyper-class Representation

Cited by: 23
Authors
Shi, Guangchen [1 ]
Wu, Yirui [1 ]
Liu, Jun [2 ]
Wan, Shaohua [3 ]
Wang, Wenhai [4 ]
Lu, Tong [5 ]
Affiliations
[1] Hohai Univ, Coll Comp & Informat, Nanjing, Peoples R China
[2] Singapore Univ Technol & Design, Informat Syst Technol & Design Pillar, Singapore, Singapore
[3] Univ Elect Sci & Technol China, Shenzhen Inst Adv Study, Shenzhen, Peoples R China
[4] Shanghai AI Lab, Shanghai, Peoples R China
[5] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing, Peoples R China
Source
PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022 | 2022
Funding
National Key R&D Program of China; National Research Foundation, Singapore;
Keywords
incremental learning; few-shot learning; semantic segmentation; adaptive update; hyper-class representation;
DOI
10.1145/3503161.3548218
CLC number
TP39 [Computer Applications];
Discipline codes
081203; 0835;
Abstract
Incremental few-shot semantic segmentation (IFSS) aims to incrementally expand a model's capacity to segment new classes of images supervised by only a few samples. However, features learned on old classes can drift significantly, causing catastrophic forgetting. Moreover, the few samples available for pixel-level segmentation on new classes lead to notorious overfitting in each learning session. In this paper, we explicitly represent class-based knowledge for semantic segmentation as a category embedding and a hyper-class embedding, where the former describes exclusive semantic properties and the latter expresses hyper-class knowledge as class-shared semantic properties. To solve IFSS problems, we present EHNet, an Embedding adaptive-update and Hyper-class representation Network, built on two ideas. First, we propose an embedding adaptive-update strategy to avoid feature drift: it maintains old knowledge through the hyper-class representation and adaptively updates category embeddings with a class-attention scheme to incorporate new classes learned in individual sessions. Second, to resist the overfitting caused by few training samples, a hyper-class embedding is learned by clustering all category embeddings for initialization and is aligned with the category embedding of the new class for enhancement, so that previously learned knowledge assists in acquiring new knowledge and performance depends less on the scale of the training data. Together, these two designs give classes a representation with sufficient semantics and limited bias, enabling the model to perform segmentation tasks that require high semantic dependence. Experiments on the PASCAL-5i and COCO datasets show that EHNet achieves new state-of-the-art performance with remarkable advantages.
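The abstract describes two concrete mechanisms: initializing a hyper-class embedding by clustering all category embeddings, and adaptively updating a category embedding with a class-attention scheme. Below is a minimal sketch of how these two steps could look, assuming PyTorch; the function names, the k-means initializer, the softmax temperature, and the sigmoid gate are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def init_hyper_class_embedding(category_embs: torch.Tensor,
                               num_hyper: int = 1,
                               iters: int = 10) -> torch.Tensor:
    """Hypothetical initializer: cluster all category embeddings
    (shape [C, D]) with a few k-means steps and return the centroids
    as hyper-class embedding(s) of shape [num_hyper, D]."""
    idx = torch.randperm(category_embs.size(0))[:num_hyper]
    centroids = category_embs[idx].clone()
    for _ in range(iters):
        # Assign each category embedding to its nearest centroid.
        assign = torch.cdist(category_embs, centroids).argmin(dim=1)
        for k in range(num_hyper):
            members = category_embs[assign == k]
            if members.numel() > 0:
                centroids[k] = members.mean(dim=0)  # recompute centroid
    return centroids

def class_attention_update(category_emb: torch.Tensor,
                           support_feats: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical class-attention update: a category embedding [D]
    attends over masked support-pixel features [N, D], and the old
    embedding is blended with the attention-pooled update via a
    parameter-free sigmoid gate."""
    attn = F.softmax(support_feats @ category_emb / temperature, dim=0)  # [N]
    update = (attn.unsqueeze(1) * support_feats).sum(dim=0)              # [D]
    gate = torch.sigmoid(torch.dot(category_emb, update))  # adaptive blend
    return gate * update + (1.0 - gate) * category_emb

if __name__ == "__main__":
    cat_embs = torch.randn(15, 256)               # e.g. 15 base classes, 256-d
    hyper = init_hyper_class_embedding(cat_embs)  # [1, 256]
    new_emb = class_attention_update(torch.randn(256), torch.randn(500, 256))
    print(hyper.shape, new_emb.shape)
```

The gate lets an embedding absorb new support evidence without overwriting old knowledge outright, which mirrors the drift-avoidance goal stated in the abstract; the actual EHNet update rule may differ.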
Pages: 5547-5556
Page count: 10