A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning

Cited by: 1045
Authors
Yim, Junho [1 ]
Joo, Donggyu [1 ]
Bae, Jihoon [2 ]
Kim, Junmo [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon, South Korea
[2] Elect & Telecommun Res Inst, Daejeon, South Korea
Source
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) | 2017
DOI
10.1109/CVPR.2017.754
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. Because a DNN maps from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred as the flow between layers, which is calculated as the inner product between features from two layers. Compared with the original network, which has the same size as the student DNN but is trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model; (2) the student DNN outperforms the original DNN; and (3) the student DNN can learn the distilled knowledge from a teacher DNN trained on a different task, and in that setting it still outperforms the original DNN trained from scratch.
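For illustration, the sketch below shows one way the "flow between two layers" described in the abstract can be computed: a matrix of channel-wise inner products between an earlier and a later feature map, averaged over spatial positions, with the student penalized for deviating from the teacher's matrices. This is a minimal PyTorch-style sketch, not the authors' released code; the names fsp_matrix and fsp_loss and the uniform weighting across layer pairs are assumptions made here for clarity.

```python
import torch

def fsp_matrix(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """Inner-product "flow" matrix between two feature maps.

    f1: (B, C1, H, W) features from an earlier layer.
    f2: (B, C2, H, W) features from a later layer (assumed here to share
        the same spatial size; in practice one map may need pooling to match).
    Returns a (B, C1, C2) matrix of channel-wise inner products,
    averaged over the H*W spatial positions.
    """
    b, c1, h, w = f1.shape
    c2 = f2.shape[1]
    f1 = f1.reshape(b, c1, h * w)
    f2 = f2.reshape(b, c2, h * w)
    return torch.bmm(f1, f2.transpose(1, 2)) / (h * w)

def fsp_loss(teacher_pairs, student_pairs):
    """Mean squared distance between teacher and student flow matrices,
    summed over the selected layer pairs (uniform weights assumed)."""
    loss = 0.0
    for (t1, t2), (s1, s2) in zip(teacher_pairs, student_pairs):
        loss = loss + torch.mean((fsp_matrix(t1, t2) - fsp_matrix(s1, s2)) ** 2)
    return loss

# Usage with random tensors standing in for one (earlier, later) layer pair:
t1, t2 = torch.randn(4, 16, 8, 8), torch.randn(4, 32, 8, 8)
s1, s2 = torch.randn(4, 16, 8, 8), torch.randn(4, 32, 8, 8)
print(fsp_loss([(t1, t2)], [(s1, s2)]).item())
```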
Pages: 7130-7138
Page count: 9