Few-shot partial multi-label learning with credible non-candidate label

Cited by: 0
Authors
Wang, Meng [1 ,2 ]
Zhao, Yunfeng [1 ,2 ]
Yan, Zhongmin [1 ,2 ]
Zhang, Jinglin [3 ]
Wang, Jun [1 ,2 ]
Yu, Guoxian [1 ,2 ]
Affiliations
[1] Shandong Univ, Sch Software, Jinan, Peoples R China
[2] Shandong Univ, Joint SDU NTU Ctr Artificial Intelligence Res, Jinan, Peoples R China
[3] Shandong Univ, Sch Control Sci & Engn, Jinan, Peoples R China
Keywords
Partial multi-label learning; Few-shot learning; Credible non-candidate label; Data augmentation
DOI
10.1016/j.ins.2025.122485
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
Partial multi-label learning (PML) addresses scenarios where each training sample is associated with multiple candidate labels, but only a subset of them are ground-truth labels. The primary difficulty in PML is to mitigate the negative impact of noisy labels. Most existing PML methods rely on sufficient samples to train a noise-robust multi-label classifier. However, in practical scenarios, such as privacy-sensitive domains or those with limited data, only a few training samples are typically available for the target task. In this paper, we propose an approach called FsPML-CNL (Few-shot Partial Multi-label Learning with Credible Non-candidate Label) to tackle the PML problem with few-shot training samples. Specifically, FsPML-CNL first utilizes the sample features and feature-prototype similarity in the embedding space to disambiguate candidate labels and to obtain label prototypes. Then, a credible non-candidate label is selected based on label correlation and confidence, and its prototype is incorporated into the training samples to generate new data that enrich the supervision. Finally, a noise-tolerant multi-label classifier is induced from the original and generated samples with a confidence-guided loss. Extensive experiments on public datasets demonstrate that FsPML-CNL outperforms competitive baselines across different settings.
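The sketch below illustrates, under stated assumptions, the kind of prototype-based pipeline the abstract describes: candidate labels are disambiguated via feature-prototype similarity in an embedding space, and the prototype of a credible non-candidate label is mixed into a sample to synthesise extra training data. Every name, shape, and update rule here (the mixing coefficient `alpha`, cosine similarity, the exponential confidence update) is an illustrative assumption, not the authors' implementation.

```python
# Illustrative sketch only: prototype-based label disambiguation and
# non-candidate-prototype mixing in the spirit of FsPML-CNL. All details
# (similarity measure, update rule, alpha) are assumptions for exposition.
import numpy as np

rng = np.random.default_rng(0)

n, d, q = 8, 16, 5                          # few-shot samples, embedding dim, labels
X = rng.normal(size=(n, d))                 # sample embeddings
Y_cand = rng.integers(0, 2, size=(n, q))    # noisy candidate-label matrix
Y_cand[np.arange(n), rng.integers(0, q, size=n)] = 1  # ensure >=1 candidate per sample

# 1) Initialise label confidences uniformly over each sample's candidate set.
conf = Y_cand / Y_cand.sum(axis=1, keepdims=True)

for _ in range(10):
    # 2) Label prototypes: confidence-weighted mean of sample embeddings.
    protos = (conf.T @ X) / np.clip(conf.sum(axis=0)[:, None], 1e-8, None)

    # 3) Re-estimate confidences from feature-prototype cosine similarity,
    #    restricted to each sample's candidate labels (disambiguation step).
    sim = (X @ protos.T) / (
        np.linalg.norm(X, axis=1, keepdims=True)
        * np.linalg.norm(protos, axis=1)[None, :] + 1e-8
    )
    conf = np.exp(sim) * Y_cand
    conf = conf / np.clip(conf.sum(axis=1, keepdims=True), 1e-8, None)

# 4) Pick a "credible" non-candidate label per sample (here: the non-candidate
#    with the highest prototype similarity) and mix its prototype into the
#    embedding to synthesise an extra training sample with enriched supervision.
alpha = 0.3                                  # mixing coefficient (assumed)
non_cand_sim = np.where(Y_cand == 0, sim, -np.inf)
extra_label = non_cand_sim.argmax(axis=1)
X_aug = (1 - alpha) * X + alpha * protos[extra_label]
Y_aug = Y_cand.copy()
Y_aug[np.arange(n), extra_label] = 1         # augmented sample carries the new label

print(X_aug.shape, Y_aug.shape)              # (8, 16) (8, 5)
```

The original and augmented samples would then jointly train a classifier under a confidence-guided loss, as the abstract states; the loss itself is not sketched here.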
Pages: 21