Towards harnessing feature embedding for robust learning with noisy labels

Cited by: 4
Authors
Zhang, Chuang [1 ]
Shen, Li [2 ]
Yang, Jian [3 ]
Gong, Chen [1 ,4 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, PCA Lab, Minist Educ, Key Lab Intelligent Percept & Syst High Dimens Informat, Nanjing, Peoples R China
[2] JD Explore Acad, Beijing, Peoples R China
[3] Nankai Univ, Coll Comp Sci, Tianjin, Peoples R China
[4] Jiangsu Key Lab Image & Video Understanding Social Secur, Nanjing, Peoples R China
Keywords
Deep learning; Robust learning; Classification; Label noise
DOI
10.1007/s10994-022-06197-6
CLC Number (Chinese Library Classification)
TP18 [Theory of artificial intelligence]
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The memorization effect of deep neural networks (DNNs) plays a pivotal role in recent label-noise learning methods. To exploit this effect, model prediction-based methods have been widely adopted; they use the outputs of DNNs in the early stage of learning to correct noisy labels. However, we observe that the model inevitably makes mistakes when predicting labels, which leads to unsatisfactory performance. By contrast, the features produced in the early stage of learning are more robust. Inspired by this observation, we propose a novel feature embedding-based method for deep learning with label noise, termed LabElNoiseDilution (LEND). Specifically, we first compute a similarity matrix based on the current embedded features to capture the local structure of the training data. Then, the noisy supervision signals carried by mislabeled data are overwhelmed by those of nearby correctly labeled examples (i.e., label noise dilution), and the effectiveness of this dilution is guaranteed by the inherent robustness of the feature embedding. Finally, the training data with diluted labels are used to train a robust classifier. Empirically, we conduct extensive experiments on both synthetic and real-world noisy datasets, comparing LEND with several representative robust learning approaches. The results verify the effectiveness of LEND.
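The abstract outlines a three-step pipeline: embed the training data, dilute each noisy label with the labels of its feature-space neighbours via a similarity matrix, and retrain the classifier on the diluted labels. The following is a minimal NumPy sketch of the dilution step only, assuming cosine similarity and exponentially weighted k-nearest-neighbour averaging; the function name dilute_labels and the hyperparameters k and temperature are illustrative assumptions and do not reproduce the paper's exact formulation.

import numpy as np

def dilute_labels(features, noisy_labels, num_classes, k=10, temperature=0.1):
    # features: (N, d) embeddings taken from the early stage of training
    # noisy_labels: (N,) integer labels, possibly corrupted
    # returns: (N, num_classes) soft labels after neighbourhood dilution
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T                      # cosine similarity matrix
    np.fill_diagonal(sim, -np.inf)               # ignore self-similarity
    one_hot = np.eye(num_classes)[noisy_labels]  # noisy supervision signals
    soft_labels = np.zeros_like(one_hot)
    for i in range(features.shape[0]):
        nn = np.argpartition(-sim[i], k)[:k]     # k nearest neighbours in feature space
        w = np.exp(sim[i, nn] / temperature)
        w /= w.sum()
        soft_labels[i] = w @ one_hot[nn]         # dilute label i with its neighbours' labels
    return soft_labels

# Toy usage with random data (shapes only; hypothetical values)
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
labels = rng.integers(0, 5, size=100)
print(dilute_labels(feats, labels, num_classes=5).shape)  # (100, 5)

In a full pipeline, the resulting soft labels would replace the original noisy targets when retraining the classifier, corresponding to the final step described in the abstract.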
Pages: 3181-3201
Number of pages: 21