Knowledge Distillation Performs Partial Variance Reduction

Cited by: 0
Authors
Safaryan, Mher [1 ]
Peste, Alexandra [1 ]
Alistarh, Dan [1 ]
Affiliations
[1] IST Austria, Klosterneuburg, Austria
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge distillation is a popular approach for enhancing the performance of "student" models, with lower representational capacity, by taking advantage of more powerful "teacher" models. Despite its apparent simplicity and widespread use, the underlying mechanics behind knowledge distillation (KD) are still not fully understood. In this work, we shed new light on the inner workings of this method, by examining it from an optimization perspective. We show that, in the context of linear and deep linear models, KD can be interpreted as a novel type of stochastic variance reduction mechanism. We provide a detailed convergence analysis of the resulting dynamics, which holds under standard assumptions for both strongly-convex and non-convex losses, showing that KD acts as a form of partial variance reduction, which can reduce the stochastic gradient noise, but may not eliminate it completely, depending on the properties of the "teacher" model. Our analysis puts further emphasis on the need for careful parametrization of KD, in particular w.r.t. the weighting of the distillation loss, and is validated empirically on both linear models and deep neural networks.
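To make the role of the distillation weighting concrete, the following is a minimal sketch, assuming a linear model with squared loss, a least-squares "teacher" fit, and an illustrative weight lam; it shows the weighted objective described in the abstract and compares per-sample gradient noise with and without the teacher term. It is not the paper's own derivation or experimental setup, and the names (per_sample_grads, grad_noise, lam) are hypothetical.

```python
# Minimal, self-contained sketch (illustrative assumptions: squared loss,
# least-squares "teacher", lam = 0.5; not the paper's exact formulation).
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)      # noisy labels

# "Teacher": the least-squares fit on the full data, standing in for a stronger model.
w_teacher, *_ = np.linalg.lstsq(X, y, rcond=None)
y_teacher = X @ w_teacher                      # teacher predictions ("soft" targets)

lam = 0.5                                      # distillation weight (hyperparameter)

def per_sample_grads(w, lam):
    # Gradient of (1-lam)*0.5*(x_i'w - y_i)^2 + lam*0.5*(x_i'w - x_i'w_teacher)^2,
    # one row per sample: these are the stochastic gradients SGD would sample.
    resid = (1 - lam) * (X @ w - y) + lam * (X @ w - y_teacher)
    return X * resid[:, None]

def grad_noise(w, lam):
    # Trace of the empirical covariance of the per-sample gradients at w.
    G = per_sample_grads(w, lam)
    return float(np.mean(np.sum((G - G.mean(axis=0)) ** 2, axis=1)))

w = rng.normal(size=d)                         # an arbitrary student iterate
print("gradient noise, plain loss       :", grad_noise(w, lam=0.0))
print("gradient noise, distillation loss:", grad_noise(w, lam=lam))
# The second number is typically smaller but not zero: the label-noise part of the
# variance shrinks as lam grows, while the data-dependent part remains, in the
# spirit of the "partial variance reduction" described in the abstract.
```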
Pages: 30