Distilling from professors: Enhancing the knowledge distillation of teachers

Cited by: 16
Authors
Bang, Duhyeon [1]
Lee, Jongwuk [2]
Shim, Hyunjung [3]
Affiliations
[1] SK Telecom, SK T Tower,65 Eulji Ro, Seoul, South Korea
[2] Sungkyunkwan Univ, Dept Software, 2066 Seobu Ro, Suwon, Gyeonggi Do, South Korea
[3] Yonsei Univ, Sch Integrated Technol, 85 Songdogwahak Ro, Incheon, South Korea
Keywords
Knowledge distillation; Professor model; Conditional adversarial autoencoder
DOI
10.1016/j.ins.2021.08.020
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Knowledge distillation (KD) is a successful technique for transferring knowledge from one machine learning model to another. The idea of KD has been widely used for tasks such as model compression and knowledge transfer between different models. However, existing studies on KD have overlooked the possibility that the dark knowledge (i.e., soft targets) obtained from a complex and large model (a.k.a. the teacher model) may be incorrect or insufficient; such knowledge can hinder the effective learning of a small model (a.k.a. the student model). In this paper, we propose the professor model, which refines the soft targets produced by the teacher model to improve KD. The professor model pursues two goals: 1) improving the prediction accuracy of the teacher's soft targets and 2) capturing their inter-class correlations. We first design the professor model by reformulating a conditional adversarial autoencoder (CAAE). We then devise two KD strategies that use both the teacher and professor models. Our empirical study demonstrates that the professor model effectively improves KD on three benchmark datasets: CIFAR100, TinyImagenet, and ILSVRC2015. Moreover, our comprehensive analysis shows that the professor model is far more effective than employing a stronger teacher model whose parameter count exceeds the sum of the teacher's and professor's parameters. Because the professor model is model-agnostic, it can be combined with any KD algorithm and consistently improves various KD techniques. (c) 2021 Elsevier Inc. All rights reserved.
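For readers unfamiliar with the distillation objective the abstract refers to, the sketch below shows standard temperature-scaled KD (soft-target KL plus hard-label cross-entropy) together with a toy "professor" module that refines the teacher's logits conditioned on the label before they are distilled into the student. All names here (Professor, kd_loss, the MLP refiner) are illustrative assumptions; the paper's actual professor is a conditional adversarial autoencoder, which this minimal stand-in does not reproduce.

```python
# Minimal sketch, assuming PyTorch: temperature-scaled KD with a hypothetical
# "professor" that refines teacher logits using the ground-truth label.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Professor(nn.Module):
    """Toy stand-in for the CAAE-based professor: maps teacher logits
    (concatenated with a one-hot label) to refined logits of the same size."""

    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes * 2, hidden),  # teacher logits + one-hot label
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, teacher_logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        one_hot = F.one_hot(labels, teacher_logits.size(1)).float()
        return self.net(torch.cat([teacher_logits, one_hot], dim=1))


def kd_loss(student_logits, soft_target_logits, labels, T=4.0, alpha=0.9):
    """Standard KD objective: KL divergence between temperature-softened
    distributions, blended with cross-entropy on the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(soft_target_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


# Usage sketch: with a pre-trained teacher and professor,
#   refined = professor(teacher(x).detach(), y)
#   loss = kd_loss(student(x), refined.detach(), y)
```

In this sketch the refined logits simply replace the teacher's logits as the soft target; how the refinement network is actually trained (adversarially, to improve accuracy and preserve inter-class correlation) is the contribution of the paper and is not shown here.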
Pages: 743-755
Page count: 13