Virtual Knowledge Distillation via Conditional GAN

Cited by: 6
Authors
Kim, Sihwan [1 ]
Affiliations
[1] Hana Inst Technol, Big Data & AI Lab, Seoul 06133, South Korea
Source
IEEE ACCESS | 2022 / Volume 10
Keywords
Training; Generators; Knowledge engineering; Bridges; Generative adversarial networks; Task analysis; Collaborative work; Image classification; model compression; knowledge distillation; self-knowledge distillation; collaborative learning; conditional generative adversarial network;
DOI
10.1109/ACCESS.2022.3163398
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Knowledge distillation aims at transferring the knowledge from a pre-trained complex model, called the teacher, to a relatively smaller and faster one, called the student. Unlike previous works that transfer the teacher's softened distributions or feature spaces, in this paper we propose a novel approach, called Virtual Knowledge Distillation (VKD), which transfers a softened distribution generated by a virtual knowledge generator conditioned on the class label. The virtual knowledge generator is trained independently of, but concurrently with, the teacher to mimic the teacher's softened distributions. Afterwards, when training a student, the virtual knowledge generator can be exploited instead of the teacher's softened distributions, or combined with existing distillation methods in a straightforward manner. Moreover, with slight modifications, VKD can be utilized not only for self-knowledge distillation but also for collaborative learning. We compare our method with several representative distillation methods in various combinations of teacher and student architectures on image classification tasks. Experimental results demonstrate that VKD shows competitive performance compared to conventional distillation methods, and that when combined with them, performance is improved by a substantial margin.
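The abstract outlines the VKD training flow. As a rough illustration only, the following PyTorch-style sketch (not the author's code; every module and variable name is a hypothetical assumption, and the conditional-GAN objective is simplified here to plain KL matching against the teacher) shows where a label-conditioned generator could stand in for the teacher's softened outputs when distilling a student.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, TEMP = 100, 4.0  # assumed class count and softening temperature

class VirtualKnowledgeGenerator(nn.Module):
    # Maps a class label (plus noise) to a vector of "virtual" softened logits.
    def __init__(self, num_classes=NUM_CLASSES, noise_dim=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, labels):
        z = torch.randn(labels.size(0), self.embed.embedding_dim, device=labels.device)
        return self.net(torch.cat([self.embed(labels), z], dim=1))

def generator_loss(generator, teacher_logits, labels):
    # Train the generator, concurrently with the teacher, to mimic the
    # teacher's softened class distribution (simplified to KL matching here).
    fake = F.log_softmax(generator(labels) / TEMP, dim=1)
    real = F.softmax(teacher_logits.detach() / TEMP, dim=1)
    return F.kl_div(fake, real, reduction="batchmean")

def student_loss(student_logits, generator, labels):
    # Distill the student from the generator's virtual knowledge
    # instead of querying the teacher itself.
    with torch.no_grad():
        virtual = F.softmax(generator(labels) / TEMP, dim=1)
    kd = F.kl_div(F.log_softmax(student_logits / TEMP, dim=1),
                  virtual, reduction="batchmean") * TEMP ** 2
    return F.cross_entropy(student_logits, labels) + kd

In the paper the generator is trained adversarially as a conditional GAN alongside the teacher; this sketch only marks the point at which the generated distribution replaces the teacher's output in the student's distillation loss.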
Pages: 34766 - 34778
Page count: 13
Related Papers
50 records in total
  • [1] Improving Knowledge Distillation via Head and Tail Categories
    Xu, Liuchi
    Ren, Jin
    Huang, Zhenhua
    Zheng, Weishi
    Chen, Yunwen
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (05) : 3465 - 3480
  • [2] Few-Shot Face Stylization via GAN Prior Distillation
    Zhao, Ruoyu
    Zhu, Mingrui
    Wang, Nannan
    Gao, Xinbo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 12
  • [3] D3K: Dynastic Data-Free Knowledge Distillation
    Li, Xiufang
    Sun, Qigong
    Jiao, Licheng
    Liu, Fang
    Liu, Xu
    Li, Lingling
    Chen, Puhua
    Zuo, Yi
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8358 - 8371
  • [4] Deep Generative Knowledge Distillation by Likelihood Finetuning
    Li, Jingru
    Chen, Xiaofeng
    Zheng, Peiyu
    Wang, Qiang
    Yu, Zhi
    IEEE ACCESS, 2023, 11 : 46441 - 46453
  • [5] Knowledge Distillation via Multi-Teacher Feature Ensemble
    Ye, Xin
    Jiang, Rongxin
    Tian, Xiang
    Zhang, Rui
    Chen, Yaowu
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 566 - 570
  • [6] Efficient Person Search via Expert-Guided Knowledge Distillation
    Zhang, Yaqing
    Li, Xi
    Zhang, Zhongfei
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (10) : 5093 - 5104
  • [7] Multispectral Scene Classification via Cross-Modal Knowledge Distillation
    Liu, Hao
    Qu, Ying
    Zhang, Liqiang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [8] A Category-Aware Curriculum Learning for Data-Free Knowledge Distillation
    Li, Xiufang
    Jiao, Licheng
    Sun, Qigong
    Liu, Fang
    Liu, Xu
    Li, Lingling
    Chen, Puhua
    Yang, Shuyuan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 9603 - 9618
  • [9] Multi-Label Clinical Time-Series Generation via Conditional GAN
    Lu, Chang
    Reddy, Chandan K.
    Wang, Ping
    Nie, Dong
    Ning, Yue
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (04) : 1728 - 1740
  • [10] Conditional generative data-free knowledge distillation
    Yu, Xinyi
    Yan, Ling
    Yang, Yang
    Zhou, Libo
    Ou, Linlin
    IMAGE AND VISION COMPUTING, 2023, 131