Me-Momentum: Extracting Hard Confident Examples from Noisily Labeled Data

Cited by: 22
Authors
Bai, Yingbin [1 ]
Liu, Tongliang [1 ]
Affiliation
[1] Univ Sydney, Trustworthy Machine Learning Lab, Sydney, NSW, Australia
Source
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021) | 2021
Funding
Australian Research Council
DOI
10.1109/ICCV48922.2021.00918
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Examples that are close to the decision boundary, which we term hard examples, are essential to shape accurate classifiers. Extracting confident examples has been widely studied in the community of learning with noisy labels. However, it remains elusive how to extract hard confident examples from the noisy training data. In this paper, we propose a deep learning paradigm to solve this problem, which is built on the memorization effect of deep neural networks: they would first learn simple patterns, i.e., patterns shared by multiple training examples. To extract hard confident examples that contain non-simple patterns and are entangled with the inaccurately labeled examples, we borrow the idea of momentum from physics. Specifically, we alternately update the confident examples and refine the classifier. Note that the confident examples extracted in the previous round can be exploited to learn a better classifier, and that the better classifier will help identify better (and harder) confident examples. We call the approach the "Momentum of Memorization" (Me-Momentum). Empirical results on benchmark-simulated and real-world label-noise data illustrate the effectiveness of Me-Momentum for extracting hard confident examples, leading to better classification performance.
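The alternating procedure the abstract describes can be read as pseudocode. The Python sketch below is a minimal illustration under stated assumptions, not the authors' released implementation: the plain confidence-threshold selection rule, the `train_fn` training hook, and the convention that the dataset yields `(index, input, label)` triples are all assumptions made for this sketch, standing in for the paper's memorization-based selection.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

def extract_confident_examples(model, loader, threshold=0.9, device="cpu"):
    # Keep examples whose prediction agrees with the (possibly noisy) label
    # at high confidence. A fixed probability threshold is an illustrative
    # stand-in for the paper's selection criterion.
    model.eval()
    confident = []
    with torch.no_grad():
        for indices, inputs, labels in loader:  # assumes (idx, x, y) batches
            probs = F.softmax(model(inputs.to(device)), dim=1)
            conf, preds = probs.max(dim=1)
            mask = (preds.cpu() == labels) & (conf.cpu() > threshold)
            confident.extend(indices[mask].tolist())
    return confident

def me_momentum_loop(model, noisy_dataset, train_fn, rounds=5, device="cpu"):
    # Alternate between (a) refining the classifier on the current confident
    # set and (b) re-extracting confident examples with the refined model.
    confident = list(range(len(noisy_dataset)))  # round 0: use all examples
    for _ in range(rounds):
        train_fn(model, Subset(noisy_dataset, confident))  # refine classifier
        full_loader = DataLoader(noisy_dataset, batch_size=256)
        confident = extract_confident_examples(model, full_loader,
                                               device=device)
    return model, confident
```

Note that the sketch keeps refining the same model object across rounds rather than retraining from scratch, loosely mirroring the abstract's point that the previous round's confident examples feed the next round's classifier, which in turn identifies better (and harder) confident examples.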
Pages: 9292-9301 (10 pages)