Prototype-wise self-knowledge distillation for few-shot segmentation

Cited by: 1
Authors
Chen, Yadang [1 ,2 ]
Xu, Xinyu [1 ,2 ]
Wei, Chenchen [1 ,2 ]
Lu, Chuhan [3 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Engn Res Ctr Digital Forens, Minist Educ, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Sch Atmospher Sci, Nanjing 210044, Peoples R China
Keywords
Few-shot segmentation; Data augmentation; Self-knowledge distillation;
DOI
10.1016/j.image.2024.117186
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Few-shot segmentation aims to segment an image containing an unseen class by referring to only a few labeled samples. However, due to the limited number of samples, many few-shot segmentation models suffer from poor generalization. Prototypical-network-based few-shot segmentation still suffers from spatial inconsistency and prototype bias: because the target class appears differently in each image, some features in the prototypes generated from the support image and its mask do not accurately reflect the generalized features of the target class. To address this support-prototype consistency issue, we put forward two modules: Data Augmentation Self-knowledge Distillation (DASKD) and Prototype-wise Regularization (PWR). The DASKD module enhances spatial consistency through data augmentation and self-knowledge distillation, which helps the model acquire generalized features of the target class and learn hidden knowledge from the support images. The PWR module obtains a more representative support prototype by applying a prototype-level loss that pulls support prototypes closer to the category center. Extensive experiments on PASCAL-5^i and COCO-20^i demonstrate that our model outperforms prior few-shot segmentation methods, surpassing the state of the art by 7.5% on PASCAL-5^i and 4.2% on COCO-20^i.
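A minimal PyTorch-style sketch of the two ideas the abstract describes (prototype extraction via masked average pooling, a self-knowledge-distillation consistency loss between the original and an augmented view, and a prototype-level regularizer toward the class center). The function names, the KL-based distillation form, and the cosine regularizer are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    # feat: (B, C, H, W) support feature map; mask: (B, 1, h, w) binary float mask.
    # Resize the mask to the feature resolution, then average features inside the mask.
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear", align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)  # (B, C) prototype

def self_distillation_loss(logits_orig, logits_aug, T=4.0):
    # Assumed self-knowledge-distillation form: KL divergence between the (detached)
    # prediction on the original support view and the prediction on the augmented view.
    p = F.softmax(logits_orig.detach() / T, dim=1)
    log_q = F.log_softmax(logits_aug / T, dim=1)
    return F.kl_div(log_q, p, reduction="batchmean") * T * T

def prototype_regularization(proto, class_center):
    # Assumed prototype-level loss: pull the support prototype toward a running
    # per-class center prototype using cosine distance.
    return 1.0 - F.cosine_similarity(proto, class_center, dim=1).mean()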
Pages: 9
Related papers
45 records in total
[21]   Progressive Parsing and Commonality Distillation for Few-Shot Remote Sensing Segmentation [J].
Lang, Chunbo ;
Wang, Junyi ;
Cheng, Gong ;
Tu, Binfei ;
Han, Junwei .
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
[22]   Base and Meta: A New Perspective on Few-Shot Segmentation [J].
Lang, Chunbo ;
Cheng, Gong ;
Tu, Binfei ;
Li, Chao ;
Han, Junwei .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (09) :10669-10686
[23]   Learning What Not to Segment: A New Perspective on Few-Shot Segmentation [J].
Lang, Chunbo ;
Cheng, Gong ;
Tu, Binfei ;
Han, Junwei .
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, :8047-8057
[24]   Adaptive Prototype Learning and Allocation for Few-Shot Segmentation [J].
Li, Gen ;
Jampani, Varun ;
Sevilla-Lara, Laura ;
Sun, Deqing ;
Kim, Jonghyun ;
Kim, Joongkyu .
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, :8330-8339
[25]   FSS-1000: A 1000-Class Dataset for Few-Shot Segmentation [J].
Li, Xiang ;
Wei, Tianhan ;
Chen, Yau Pun ;
Tai, Yu-Wing ;
Tang, Chi-Keung .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, :2866-2875
[26]  
Li YW, 2021, arXiv:2110.11742
[27]   Part-Aware Prototype Network for Few-Shot Semantic Segmentation [J].
Liu, Yongfei ;
Zhang, Xiangyi ;
Zhang, Songyang ;
He, Xuming .
COMPUTER VISION - ECCV 2020, PT IX, 2020, 12354 :142-158
[28]   Learning Non-target Knowledge for Few-shot Semantic Segmentation [J].
Liu, Yuanwei ;
Liu, Nian ;
Cao, Qinglong ;
Yao, Xiwen ;
Han, Junwei ;
Shao, Ling .
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, :11563-11572
[29]  
Min JH, 2021, arXiv:2104.01538
[30]  
Morcos A.S., 2018, arXiv:1803.06959, p. 1