Dynamic Knowledge Distillation for Pre-trained Language Models

Cited by: 0
Authors
Li, Lei [1 ]
Lin, Yankai [2 ]
Ren, Shuhuai [1 ]
Li, Peng [2 ]
Zhou, Jie [2 ]
Sun, Xu [1 ]
Affiliations
[1] Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, Beijing, Peoples R China
[2] Tencent Inc, Pattern Recognit Ctr, WeChat AI, Shenzhen, Peoples R China
Source
2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021) | 2021
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Knowledge distillation (KD) has proven effective for compressing large-scale pre-trained language models. However, existing methods conduct KD statically, e.g., the student model aligns its output distribution to that of a selected teacher model on a pre-defined training dataset. In this paper, we explore dynamic knowledge distillation, which empowers the student to adjust the learning procedure according to its competency, with regard to both student performance and learning efficiency. We explore dynamic adjustment along three dimensions: teacher model adoption, data selection, and KD objective adaptation. Experimental results show that (1) proper selection of the teacher model can boost the performance of the student model; (2) conducting KD with only 10% of the most informative instances achieves comparable performance while greatly accelerating training; (3) student performance can be further improved by adjusting the supervision contribution of the different alignment objectives. We find dynamic knowledge distillation promising and discuss potential future directions towards more efficient KD methods.
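To make the three dynamic adjustments described in the abstract concrete, a minimal PyTorch sketch of two of them (uncertainty-based data selection and a weighted KD objective) is given below. It illustrates the general idea only and is not the authors' released implementation; the function names (kd_loss, select_informative), the entropy-based uncertainty criterion, and the alpha/temperature settings are assumptions made for this example.
```python
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft alignment: KL divergence between temperature-scaled student and teacher distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard alignment: standard cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    # alpha controls the supervision contribution of the two alignment objectives.
    return alpha * soft + (1.0 - alpha) * hard


def select_informative(student_logits, keep_ratio=0.1):
    # Uncertainty-based data selection: keep the fraction of instances on which
    # the student is least confident (highest predictive entropy).
    probs = F.softmax(student_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    k = max(1, int(keep_ratio * entropy.numel()))
    return entropy.topk(k).indices


# Toy usage with random tensors standing in for model outputs on one batch.
batch_size, num_classes = 32, 3
student_logits = torch.randn(batch_size, num_classes, requires_grad=True)
teacher_logits = torch.randn(batch_size, num_classes)
labels = torch.randint(0, num_classes, (batch_size,))

idx = select_informative(student_logits.detach(), keep_ratio=0.1)
loss = kd_loss(student_logits[idx], teacher_logits[idx], labels[idx])
loss.backward()
print(loss.item())
```
In this sketch, the keep_ratio of 0.1 mirrors the paper's finding that distilling on roughly 10% of the most informative instances can retain performance while speeding up training; in practice the selection criterion and the objective weighting would be tuned to the student's current competency.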
Pages: 379-389
Page count: 11