Virtual Knowledge Distillation via Conditional GAN

Cited by: 6
Author
Kim, Sihwan [1 ]
Affiliation
[1] Hana Inst Technol, Big Data & AI Lab, Seoul 06133, South Korea
Source
IEEE ACCESS, 2022, Vol. 10
Keywords
Training; Generators; Knowledge engineering; Bridges; Generative adversarial networks; Task analysis; Collaborative work; Image classification; model compression; knowledge distillation; self-knowledge distillation; collaborative learning; conditional generative adversarial network
DOI
10.1109/ACCESS.2022.3163398
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Knowledge distillation aims at transferring the knowledge of a pre-trained complex model, called the teacher, to a relatively smaller and faster one, called the student. Unlike previous works that transfer the teacher's softened distributions or feature spaces, in this paper we propose a novel approach, called Virtual Knowledge Distillation (VKD), that transfers a softened distribution generated by a virtual knowledge generator conditioned on the class label. The virtual knowledge generator is trained independently of, but concurrently with, the teacher to mimic the teacher's softened distributions. Afterwards, when training a student, the virtual knowledge generator can be exploited in place of the teacher's softened distributions, or combined with existing distillation methods in a straightforward manner. Moreover, with slight modifications, VKD can be utilized not only for self-knowledge distillation but also for collaborative learning. We compare our method with several representative distillation methods across various combinations of teacher and student architectures on image classification tasks. Experimental results demonstrate that VKD achieves competitive performance compared to conventional distillation methods and, when combined with them, improves performance by a substantial margin.
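To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described in the abstract. It is an illustration under assumed names and hyperparameters (VirtualKnowledgeGenerator, NUM_CLASSES, Z_DIM, temperature T, mixing weight alpha), not the authors' implementation: a class-conditional generator is trained concurrently with the teacher to mimic the teacher's temperature-softened outputs, and the student is then distilled from the generator's virtual soft labels instead of querying the teacher.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed hyperparameters, for illustration only.
NUM_CLASSES, Z_DIM, T = 10, 64, 4.0

class VirtualKnowledgeGenerator(nn.Module):
    # Class-conditional generator: (noise z, label y) -> logits of a
    # "virtual" softened class distribution.
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, Z_DIM)
        self.net = nn.Sequential(
            nn.Linear(2 * Z_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_CLASSES),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def generator_step(generator, teacher_logits, labels, opt_g):
    # Train the generator (concurrently with the teacher) to mimic the
    # teacher's temperature-softened distribution for each class label.
    z = torch.randn(labels.size(0), Z_DIM)
    g_logits = generator(z, labels)
    loss = F.kl_div(
        F.log_softmax(g_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

def student_step(student, generator, images, labels, opt_s, alpha=0.5):
    # Distill the student from the generator's virtual soft labels
    # in place of the teacher's softened distributions.
    s_logits = student(images)
    with torch.no_grad():
        z = torch.randn(labels.size(0), Z_DIM)
        v_logits = generator(z, labels)  # virtual teacher signal
    kd = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(v_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    ce = F.cross_entropy(s_logits, labels)
    loss = alpha * kd + (1.0 - alpha) * ce
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()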
Pages: 34766-34778
Page count: 13
Related Papers
50 records in total
  • [31] Conditional pseudo-supervised contrast for data-Free knowledge distillation
    Shao, Renrong
    Zhang, Wei
    Wang, Jun
    PATTERN RECOGNITION, 2023, 143
  • [32] ResKD: Residual-Guided Knowledge Distillation
    Li, Xuewei
    Li, Songyuan
    Omar, Bourahla
    Wu, Fei
    Li, Xi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 4735 - 4746
  • [33] Synergic Adversarial Label Learning for Grading Retinal Diseases via Knowledge Distillation and Multi-Task Learning
    Ju, Lie
    Wang, Xin
    Zhao, Xin
    Lu, Huimin
    Mahapatra, Dwarikanath
    Bonnington, Paul
    Ge, Zongyuan
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2021, 25 (10) : 3709 - 3720
  • [34] Teacher or supervisor? Effective online knowledge distillation via guided collaborative learning
    Borza, Diana Laura
    Ileni, Tudor Alexandru
    Marinescu, Alexandru Ion
    Darabant, Sergiu Adrian
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 228
  • [35] Compression of Acoustic Model via Knowledge Distillation and Pruning
    Li, Chenxing
    Zhu, Lei
    Xu, Shuang
    Gao, Peng
    Xu, Bo
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 2785 - 2790
  • [36] Efficient Crowd Counting via Dual Knowledge Distillation
    Wang, Rui
    Hao, Yixue
    Hu, Long
    Li, Xianzhi
    Chen, Min
    Miao, Yiming
    Humar, Iztok
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 569 - 583
  • [37] Student Network Learning via Evolutionary Knowledge Distillation
    Zhang, Kangkai
    Zhang, Chunhui
    Li, Shikun
    Zeng, Dan
    Ge, Shiming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (04) : 2251 - 2263
  • [38] Improving Deep Mutual Learning via Knowledge Distillation
    Lukman, Achmad
    Yang, Chuan-Kai
APPLIED SCIENCES-BASEL, 2022, 12 (15)
  • [39] CDFKD-MFS: Collaborative Data-Free Knowledge Distillation via Multi-Level Feature Sharing
    Hao, Zhiwei
    Luo, Yong
    Wang, Zhi
    Hu, Han
    An, Jianping
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 4262 - 4274
  • [40] KTransGAN: Variational Inference-Based Knowledge Transfer for Unsupervised Conditional Generative Learning
    Azzam, Mohamed
    Wu, Wenhao
    Cao, Wenming
    Wu, Si
    Wong, Hau-San
    IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 3318 - 3331