Learning Noisy Few-Shot Classification Without Relying on Pseudo-Noise Data

Cited by: 0
Authors
Wu, Yixin [1 ,2 ]
Xue, Hui [1 ,2 ]
An, Yuexuan [1 ,2 ]
Fang, Pengfei [1 ,2 ]
Affiliations
[1] Southeast Univ, Sch Comp Sci & Engn, Nanjing 210096, Peoples R China
[2] Southeast Univ, Key Lab New Generat Artificial Intelligence Techno, Minist Educ, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Noise measurement; Training; Prototypes; Smoothing methods; Noise; Adaptation models; Robustness; Labeling; Feature extraction; Accuracy; Few-shot learning; noisy labels; noisy few-shot learning; model robustness;
DOI
10.1109/LSP.2024.3496584
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Recently, noisy few-shot learning (NFSL) has explored model robustness to label noise, relaxing the requirement of completely accurate labeling in small-sample scenarios. Existing NFSL methods directly employ delicately designed pseudo-noise to simulate and adapt to noisy environments. However, determining the optimal combination of pseudo-noise is challenging, and an improper configuration may adversely affect the trained models. To address these problems, this letter proposes a novel Adaptive MultI-view Denoising Evaluation (AMIDE) framework, which builds an adaptive and robust embedding and classifier without relying on pseudo-noise. In the training phase, we design an adaptive label smoothing scheme, in which soft labels with learnable smoothing coefficients are inferred from the data distribution to mitigate overconfident labeling. In the testing phase, we propose a multi-view fused evaluation scheme, in which different network layers are treated as distinct views to generate potentially clean features and refine prototypes, thereby improving evaluation accuracy. In this way, the impact of noise is effectively alleviated from two perspectives. Extensive experiments on several few-shot classification benchmarks demonstrate the superiority and robustness of our method.
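The abstract describes two mechanisms: soft labels governed by a smoothing coefficient, and class prototypes fused across multiple feature views. The letter's actual AMIDE formulation is not reproduced here; the following is only a rough illustrative sketch, in which the function names, the uniform-smoothing rule, and the mean-fusion rule are assumptions rather than the authors' design (in AMIDE the coefficient is learnable, not fixed as below).

```python
import numpy as np

def smooth_labels(y, num_classes, alpha):
    """Soft labels: keep (1 - alpha) mass on the annotated class and
    spread alpha uniformly over all classes. In AMIDE the coefficient
    alpha is learnable; here it is a fixed scalar for illustration."""
    onehot = np.eye(num_classes)[y]
    return (1.0 - alpha) * onehot + alpha / num_classes

def fused_prototypes(views, labels, num_classes):
    """Compute per-class mean prototypes in each feature 'view'
    (e.g., features from different network layers), then fuse the
    views by simple averaging -- an assumed fusion rule."""
    per_view = []
    for feats in views:  # each view: (n_samples, d) array
        protos = np.stack(
            [feats[labels == c].mean(axis=0) for c in range(num_classes)]
        )
        per_view.append(protos)
    return np.mean(per_view, axis=0)  # (num_classes, d)
```

For example, with `alpha = 0.2` and two classes, a label `0` becomes the soft vector `[0.9, 0.1]`, which penalizes overconfident predictions on potentially mislabeled support samples.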
Pages: 86-90
Page count: 5