Robust Learning Against Label Noise Based on Activation Trend Tracking

Cited by: 1
Authors
Wang, Yilin [1 ]
Zhang, Yulong [1 ]
Jiang, Zhiqiang [2 ]
Zheng, Li [1 ,2 ]
Chen, Jinshui [1 ]
Lu, Jiangang [1 ,3 ]
Affiliations
[1] Zhejiang Univ, Coll Control Sci & Engn, State Key Lab Ind Control Technol, Hangzhou 310027, Peoples R China
[2] Zhongce Rubber Grp Co Ltd, Hangzhou 310018, Peoples R China
[3] Res Ctr Intelligent Chips & Devices, Zhejiang Lab, Hangzhou 311100, Peoples R China
Keywords
Activation analysis; label noise; model robustness; robust loss function; tire-defect detection;
DOI
10.1109/TIM.2022.3219493
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification
0808; 0809;
Abstract
Defect detection of industrial products plays an important role in industrial production, and deep learning methods have been introduced to replace manual inspection and reduce costs. Training deep neural networks (DNNs) requires large datasets with precisely annotated labels, but such clean datasets are difficult to obtain, especially in industrial settings. In the tire industry, for instance, the classification of tire defects is ambiguous, so building a large, cleanly labeled dataset for defect detection is nearly impossible. Training robust models on datasets containing noisy labels has therefore received increasing attention. Unlike existing methods, we study the learning process on such datasets from a new perspective: the channel-wise activation intensity of training samples. Deep models exhibit a two-phase learning process when trained on noisy datasets, which differs markedly from training with clean labels. This observation motivates a novel method called activation trend tracking-based robust learning (ATTRL). During training, activation intensity is monitored and used to adjust the loss function, preventing overfitting to noisy labels. Extensive experiments provide empirical analysis and robustness verification of the proposed method. Experimental results on synthetic and tire datasets demonstrate that our approach tolerates high proportions of noisy labels and outperforms existing methods, indicating good prospects for industrial application.
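The record does not spell out how ATTRL maps activation trends to loss adjustments; the snippet below is only a rough illustrative sketch of the general idea, assuming per-sample mean activation intensities are logged each epoch and that samples whose activation keeps rising late in training are down-weighted. The function name, the slope test, and the 0.5 weight are hypothetical, not taken from the paper.

```python
import numpy as np

def trend_based_weights(activation_history, rise_threshold=0.0, down_weight=0.5):
    """Hypothetical sketch: down-weight samples whose mean activation
    intensity still trends upward late in training, a possible sign
    that the network is memorizing a noisy label.

    activation_history: array of shape (epochs, samples), each entry the
    sample's mean channel-wise activation at that epoch.
    """
    epochs = np.arange(activation_history.shape[0])
    # Least-squares slope of each sample's activation trajectory
    # (np.polyfit handles a 2-D y, one column per sample).
    slopes = np.polyfit(epochs, activation_history, deg=1)[0]
    # Still-rising activation -> reduced contribution to the loss.
    return np.where(slopes > rise_threshold, down_weight, 1.0)

# Toy example: sample 0 keeps rising, sample 1 has plateaued.
history = np.array([[0.1, 0.50],
                    [0.3, 0.50],
                    [0.5, 0.49]])
weights = trend_based_weights(history)  # sample 0 down-weighted
```

Under this sketch, the per-sample losses would be multiplied by these weights before averaging, so suspected noisy samples contribute less to the gradient.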
Pages: 12