Uncertainty guided semi-supervised few-shot segmentation with prototype level fusion

Cited by: 1
Authors
Wang, Hailing [1 ,2 ]
Wu, Chunwei [1 ,2 ]
Zhang, Hai [1 ,2 ]
Cao, Guitao [1 ,2 ]
Cao, Wenming [3 ]
Affiliations
[1] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai 200062, Peoples R China
[2] East China Normal Univ, MOE Res Ctr Software Hardware Codesign Engn, Shanghai 200062, Peoples R China
[3] Shenzhen Univ, Coll Informat Engn, Shenzhen 518060, Peoples R China
Keywords
Uncertainty; Prototype learning; Semi-supervised learning; Few-shot semantic segmentation; Prototype-level fusion strategy; Network
DOI
10.1016/j.neunet.2024.106802
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Few-Shot Semantic Segmentation (FSS) aims to tackle the challenge of segmenting novel categories with limited annotated data. However, given the diversity among support-query pairs, transferring meta-knowledge to unseen categories poses a significant challenge, particularly in scenarios featuring substantial intra-class variance within an episode task. To alleviate this issue, we propose the Uncertainty Guided Adaptive Prototype Network (UGAPNet) for semi-supervised few-shot semantic segmentation. The key innovation lies in the generation of reliable pseudo-prototypes as an additional supplement to alleviate intra-class semantic bias. Specifically, we employ a shared meta-learner to produce segmentation results for unlabeled images in the pseudo-label prediction module. Subsequently, we incorporate an uncertainty estimation module to quantify the difference between prototypes extracted from query and support images, facilitating pseudo-label denoising. Utilizing these refined pseudo-label samples, we introduce a prototype rectification module to obtain effective pseudo-prototypes and generate a generalized adaptive prototype for the segmentation of query images. Furthermore, generalized few-shot semantic segmentation extends the paradigm of few-shot semantic segmentation by simultaneously segmenting both unseen and seen classes during evaluation. To address the challenge of confusion-region prediction between these two categories, we further propose a novel Prototype-Level Fusion Strategy in the prototypical contrastive space. Extensive experiments conducted on two benchmarks demonstrate the effectiveness of the proposed UGAPNet and prototype-level fusion strategy. Our source code will be available at https://github.com/WHL182/UGAPNet.
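The abstract's core idea, extracting class prototypes and fusing a support prototype with uncertainty-weighted pseudo-prototypes from unlabeled images, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names, the exponential confidence weighting, and the equal 50/50 blend are assumptions.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Compute a class prototype as the mask-weighted mean of feature vectors.

    features: (H, W, C) feature map from a backbone network.
    mask: (H, W) binary foreground mask for the target class.
    """
    weights = mask[..., None]                  # (H, W, 1), broadcast over channels
    total = weights.sum()
    return (features * weights).sum(axis=(0, 1)) / max(total, 1e-6)

def fuse_prototypes(support_proto, pseudo_protos, uncertainties):
    """Fuse the support prototype with pseudo-prototypes from unlabeled images,
    down-weighting each pseudo-prototype by its estimated uncertainty.

    support_proto: (C,) prototype from the labeled support image.
    pseudo_protos: list of (C,) prototypes from pseudo-labeled images.
    uncertainties: list of scalar uncertainty scores (lower = more reliable).
    """
    conf = np.exp(-np.asarray(uncertainties))  # low uncertainty -> high confidence
    weights = conf / conf.sum()                # normalize to a convex combination
    pseudo = (weights[:, None] * np.asarray(pseudo_protos)).sum(axis=0)
    return 0.5 * support_proto + 0.5 * pseudo  # equal blend (illustrative choice)
```

The exponential weighting is one simple way to turn an uncertainty score into a fusion weight; the paper's actual rectification and fusion modules are learned components.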
Pages: 13