How to Train the Teacher Model for Effective Knowledge Distillation

Cited by: 0
Authors
Hamidi, Shayan Mohajer [1 ]
Deng, Xizhen [2 ]
Tan, Renhao [1 ]
Ye, Linfeng [1 ]
Salamah, Ahmed Hussein [1 ]
Affiliations
[1] Univ Waterloo, Waterloo, ON N2L 3G1, Canada
[2] Univ Michigan, Ann Arbor, MI 48109 USA
Source
COMPUTER VISION - ECCV 2024, PT LXXXIX | 2025 / Vol. 15147
Keywords
Knowledge distillation; Bayes conditional probability density; Mean squared error
DOI
10.1007/978-3-031-73024-5_1
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, it was shown that the role of the teacher in knowledge distillation (KD) is to provide the student with an estimate of the true Bayes conditional probability density (BCPD). In particular, these findings show that the student's error rate can be upper-bounded by the mean squared error (MSE) between the teacher's output and the BCPD. Consequently, to enhance KD efficacy, the teacher should be trained so that its output is close to the BCPD in the MSE sense. This paper shows that training the teacher model with MSE loss is equivalent to minimizing the MSE between its output and the BCPD, which is precisely the teacher's core responsibility: providing the student with a BCPD estimate that is close in MSE terms. Through a comprehensive set of experiments, we demonstrate that substituting the conventional cross-entropy-trained teacher with one trained using MSE loss in state-of-the-art KD methods consistently boosts the student's accuracy, with improvements of up to 2.6%. The code for this paper is publicly available at: https://github.com/ECCV2024MSE/ECCV_MSE_Teacher.
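The swap the abstract describes can be illustrated concretely. One reasoning step worth making explicit: since the conditional expectation E[onehot(y) | x] is exactly the BCPD, the function that minimizes the expected MSE to one-hot labels is the BCPD itself, which is why MSE training drives the teacher's output toward the BCPD in the MSE sense. Below is a minimal sketch, assuming a standard PyTorch classification setup; the function name mse_teacher_loss and the loop variables are illustrative and are not taken from the paper's released code.

    # Minimal sketch (not the authors' released implementation): train a
    # teacher with MSE loss on its softmax output against one-hot labels,
    # instead of the conventional cross-entropy loss.
    import torch
    import torch.nn.functional as F

    def mse_teacher_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        """MSE between the teacher's softmax probabilities and one-hot labels.

        Its population minimizer is E[onehot(y) | x], i.e. the BCPD.
        """
        probs = F.softmax(logits, dim=1)  # teacher's probability estimate
        onehot = F.one_hot(targets, num_classes=logits.size(1)).float()
        return F.mse_loss(probs, onehot)

    # Usage inside an otherwise unchanged training loop
    # (model, loader, and optimizer are assumed to exist):
    # for x, y in loader:
    #     loss = mse_teacher_loss(model(x), y)
    #     optimizer.zero_grad()
    #     loss.backward()
    #     optimizer.step()

Everything else in the teacher's training pipeline stays as-is under this sketch; only the loss term changes, after which the MSE-trained teacher replaces the cross-entropy-trained one in the downstream KD method.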
Pages: 1-18
Number of pages: 18