Similarity-Preserving Knowledge Distillation

Cited by: 697
Authors
Tung, Frederick [1,2]
Mori, Greg [1,2]
Affiliations
[1] Simon Fraser Univ, Burnaby, BC, Canada
[2] Borealis AI, Toronto, ON, Canada
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019) | 2019
DOI
10.1109/ICCV.2019.00145
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach.
Pages: 1365-1374
Page count: 10
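The loss described in the abstract can be sketched as follows: flatten each sample's activations into a vector, form the batch's pairwise similarity matrix, row-normalize it, and penalize the squared Frobenius distance between the teacher's and student's matrices. This is a minimal NumPy illustration of that idea, not the authors' implementation; the function names and the choice of dot-product similarity with row-wise L2 normalization are assumptions for the sketch.

```python
import numpy as np

def similarity_matrix(acts):
    """Row-normalized pairwise similarity matrix for a batch of activations.

    acts: array of shape (b, ...) — one activation map per sample.
    Returns a (b, b) matrix whose rows are L2-normalized.
    """
    q = acts.reshape(acts.shape[0], -1)          # (b, c*h*w): one vector per sample
    g = q @ q.T                                  # (b, b): pairwise dot products
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    return g / np.maximum(norms, 1e-12)          # row-wise L2 normalization

def sp_loss(teacher_acts, student_acts):
    """Similarity-preserving distillation loss for one pair of layers:
    mean squared difference between the two normalized similarity matrices."""
    b = teacher_acts.shape[0]
    g_t = similarity_matrix(teacher_acts)
    g_s = similarity_matrix(student_acts)
    return np.sum((g_t - g_s) ** 2) / (b * b)
```

Because only the normalized pairwise similarities are compared, the student is free to use a representation space of different dimensionality or scale than the teacher's; for example, scaling the student's activations by a constant leaves the loss unchanged.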