Uncertainty-Aware Self-Knowledge Distillation

Cited: 0
Authors
Yang, Yang [1 ]
Wang, Chao [1 ]
Gong, Lei [1 ]
Wu, Min [2 ]
Chen, Zhenghua [2 ]
Gao, Yingxue [1 ]
Wang, Teng [1 ]
Zhou, Xuehai [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Peoples R China
[2] ASTAR, Inst Infocomm Res I2R, Ctr Frontier AI Res CFAR, Singapore 138632, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Uncertainty; Calibration; Accuracy; Vectors; Training; Predictive models; Smoothing methods; Artificial neural networks; Object detection; Circuits and systems; Uncertainty quantification; self-knowledge distillation; contrastive learning; image recognition;
DOI
10.1109/TCSVT.2024.3516145
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
Self-knowledge distillation has emerged as a powerful method that notably boosts the prediction accuracy of deep neural networks while remaining resource-efficient, setting it apart from traditional teacher-student knowledge distillation. However, in safety-critical applications high accuracy alone is not enough; conveying uncertainty effectively is equally important. Unfortunately, existing self-knowledge distillation methods do not improve prediction accuracy and uncertainty quantification simultaneously. To address this gap, we present an uncertainty-aware self-knowledge distillation method named UASKD. UASKD introduces an uncertainty-aware contrastive loss and a prediction synthesis technique within the self-knowledge distillation process, aiming to fully harness the potential of self-knowledge distillation for improving both prediction accuracy and uncertainty quantification. Extensive evaluations show that UASKD consistently surpasses other self-knowledge distillation techniques and numerous uncertainty calibration methods on both prediction accuracy and uncertainty quantification metrics across various classification and object detection tasks, highlighting its efficacy and adaptability.
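The abstract does not specify UASKD's exact loss formulation, so as a generic illustration of the self-knowledge distillation setting it builds on, the sketch below combines cross-entropy on hard labels with a KL-divergence term toward the model's own temperature-softened earlier predictions. The function names, temperature `t`, and mixing weight `alpha` are illustrative assumptions, not the paper's method.

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable).
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distillation_loss(logits, past_logits, labels, t=2.0, alpha=0.5):
    """Generic self-knowledge distillation objective (not UASKD's):
    cross-entropy on hard labels plus KL divergence from the model's own
    temperature-softened earlier predictions, which act as the 'teacher'."""
    p = softmax(logits)                    # current hard-label predictions
    q = softmax(past_logits, t=t)          # soft targets from an earlier snapshot
    p_t = softmax(logits, t=t)             # current predictions, softened
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    kl = np.mean(np.sum(q * (np.log(q + 1e-12) - np.log(p_t + 1e-12)), axis=-1))
    # t**2 rescales KL gradients to match the cross-entropy term, as in
    # standard knowledge distillation.
    return (1 - alpha) * ce + alpha * (t ** 2) * kl
```

When the current and past logits agree, the KL term vanishes and the loss reduces to scaled cross-entropy; disagreement with the soft targets increases it, which is the self-regularizing signal such methods exploit.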
Pages: 4464-4478
Page count: 15