Hierarchical Noise-Tolerant Meta-Learning With Noisy Labels

Times Cited: 0
Authors
Liu, Yahui [1 ,2 ]
Wang, Jian [1 ,2 ]
Yang, Yuntai [1 ,2 ]
Wang, Renlong [1 ,2 ]
Wang, Simiao [3 ]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650504, Peoples R China
[2] Kunming Univ Sci & Technol, Yunnan Key Lab Artificial Intelligence, Kunming 650504, Peoples R China
[3] Dalian Maritime Univ, Coll Artificial Intelligence, Dalian 116026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Noise measurement; Optimization; Noise; Feature extraction; Robustness; Meta-learning; CAMs; Training; Predictive models; Accuracy; Bi-level optimization; class activation maps; hierarchical noise-tolerant meta-learning (HNML); noisy labels
DOI
10.1109/LSP.2024.3480033
CLC Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Due to the detrimental impact of noisy labels on the generalization of deep neural networks, learning with noisy labels has become an important task in modern deep learning applications. Many previous efforts have mitigated this problem by either removing noisy samples or correcting labels. In this letter, we address this issue from a new perspective and empirically find that models trained with both clean and mislabeled samples exhibit distinguishable activation feature distributions. Building on this observation, we propose a novel meta-learning approach called the Hierarchical Noise-tolerant Meta-Learning (HNML) method, which involves a bi-level optimization comprising meta-training and meta-testing. In the meta-training stage, we incorporate consistency loss at the output prediction hierarchy to facilitate model adaptation to dynamically changing label noise. In the meta-testing stage, we extract activation feature distributions using class activation maps and propose a new mask-guided self-learning method to correct biases in the foreground regions. Through the bi-level optimization of HNML, we ensure that the model generates discriminative feature representations that are insensitive to noisy labels. When evaluated on both synthetic and real-world noisy datasets, our HNML method achieves significant improvements over previous state-of-the-art methods.
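The bi-level recipe described in the abstract (a meta-training stage with an output-level consistency loss, and a meta-testing stage that thresholds class activation maps into foreground masks) can be sketched roughly as below. The symmetric-KL consistency loss and the 0.5 CAM threshold are illustrative assumptions made for this sketch, not the paper's exact formulation.

```python
import numpy as np


def consistency_loss(p1, p2):
    # One plausible "consistency loss at the output prediction hierarchy":
    # symmetric KL divergence between two prediction distributions.
    # (Assumption -- the abstract does not give the exact form.)
    eps = 1e-8
    def kl(a, b):
        return np.sum(a * (np.log(a + eps) - np.log(b + eps)), axis=-1)
    return float(np.mean(0.5 * (kl(p1, p2) + kl(p2, p1))))


def cam_foreground_mask(feature_maps, class_weights, thresh=0.5):
    # Class activation map: channel-weighted sum of feature maps,
    # min-max normalized to [0, 1], then thresholded into a binary
    # foreground mask to guide self-learning on foreground regions.
    # feature_maps: (C, H, W); class_weights: (C,)
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return (cam >= thresh).astype(np.float32)
```

Identical prediction distributions give zero consistency loss, so the term only penalizes the model when its outputs drift under the dynamically changing label noise; the mask restricts the meta-testing correction to activated foreground regions.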
Pages: 3020-3024
Page count: 5
Related Papers
32 records
[1] Algan, Gorkem; Ulusoy, Ilkay. MetaLabelNet: Learning to Generate Soft-Labels From Noisy-Labels. IEEE Transactions on Image Processing, 2022, 31: 4352-4362.
[2] Chen, Zhen; Li, Wuyang; Xing, Xiaohan; Yuan, Yixuan. Medical federated learning with joint graph purification for noisy label learning. Medical Image Analysis, 2023, 90.
[3] Guo, Hui. Advances in Neural Information Processing Systems, 2023.
[4] He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian. Identity Mappings in Deep Residual Networks. Computer Vision - ECCV 2016, Pt IV, 2016, 9908: 630-645.
[5] Huang, Zhizhong; Zhang, Junping; Shan, Hongming. Twin Contrastive Learning with Noisy Labels. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 11661-11670.
[6] Kim, Donghyeon; Kim, Gwantae; Lee, Bokyeung; Ko, Hanseok. Prototypical Knowledge Distillation for Noise Robust Keyword Spotting. IEEE Signal Processing Letters, 2022, 29: 2298-2302.
[7] Li, Chenxin; Lin, Xin; Mao, Yijin; Lin, Wei; Qi, Qi; Ding, Xinghao; Huang, Yue; Liang, Dong; Yu, Yizhou. Domain generalization on medical imaging classification using episodic training with task augmentation. Computers in Biology and Medicine, 2022, 141.
[8] Li, J. Proceedings of the International Conference on Learning Representations, 2020.
[9] Li, Junnan; Wong, Yongkang; Zhao, Qi; Kankanhalli, Mohan S. Learning to Learn from Noisy Labeled Data. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 5046-5054.
[10] Li, S. K. Advances in Neural Information Processing Systems, 2022.