Virtual Knowledge Distillation via Conditional GAN

Cited by: 6
Authors
Kim, Sihwan [1 ]
Affiliations
[1] Hana Inst Technol, Big Data & AI Lab, Seoul 06133, South Korea
Source
IEEE ACCESS | 2022, Vol. 10
Keywords
Training; Generators; Knowledge engineering; Bridges; Generative adversarial networks; Task analysis; Collaborative work; Image classification; model compression; knowledge distillation; self-knowledge distillation; collaborative learning; conditional generative adversarial network
DOI
10.1109/ACCESS.2022.3163398
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Knowledge distillation aims at transferring the knowledge from a pre-trained complex model, called the teacher, to a relatively smaller and faster one, called the student. Unlike previous works that transfer the teacher's softened distributions or feature spaces, in this paper we propose a novel approach, called Virtual Knowledge Distillation (VKD), that transfers a softened distribution generated by a virtual knowledge generator conditioned on the class label. The virtual knowledge generator is trained independently of, but concurrently with, the teacher to mimic the teacher's softened distributions. Afterwards, when training a student, the virtual knowledge generator can be used in place of the teacher's softened distributions or combined with existing distillation methods in a straightforward manner. Moreover, with slight modifications, VKD can be applied not only to self-knowledge distillation but also to collaborative learning. We compare our method with several representative distillation methods across various combinations of teacher and student architectures on image classification tasks. Experimental results demonstrate that VKD achieves competitive performance compared to conventional distillation methods, and when combined with them, improves performance by a substantial margin.
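The abstract describes a two-phase procedure: a label-conditioned generator is first trained alongside the teacher to reproduce its softened output distributions, and the student is then distilled from this "virtual" knowledge rather than (or in addition to) the teacher itself. The following is a minimal PyTorch sketch of that flow; the module structure, noise dimension, temperature, and loss weighting are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of the VKD idea described in the abstract.
# All names, dimensions, and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VirtualKnowledgeGenerator(nn.Module):
    """Maps a class label (plus noise) to a softened logit vector,
    trained to mimic the teacher's softened output distribution."""
    def __init__(self, num_classes: int, noise_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim * 2, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, labels: torch.Tensor) -> torch.Tensor:
        z = torch.randn(labels.size(0), self.embed.embedding_dim, device=labels.device)
        return self.net(torch.cat([self.embed(labels), z], dim=1))

def distill_loss(logits, target_logits, T: float = 4.0):
    """Standard temperature-scaled KL distillation loss."""
    p = F.log_softmax(logits / T, dim=1)
    q = F.softmax(target_logits / T, dim=1)
    return F.kl_div(p, q, reduction="batchmean") * T * T

# Phase 1: train the generator concurrently with the teacher so that it
# mimics the teacher's softened distribution for each class label.
def generator_step(generator, teacher, images, labels, opt_g, T=4.0):
    with torch.no_grad():
        teacher_logits = teacher(images)
    virtual_logits = generator(labels)
    loss = distill_loss(virtual_logits, teacher_logits, T)
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

# Phase 2: distill the student from the virtual knowledge (teacher-free),
# mixed with the usual cross-entropy on the hard labels.
def student_step(student, generator, images, labels, opt_s, alpha=0.5, T=4.0):
    student_logits = student(images)
    with torch.no_grad():
        virtual_logits = generator(labels)
    loss = (1 - alpha) * F.cross_entropy(student_logits, labels) \
         + alpha * distill_loss(student_logits, virtual_logits, T)
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()
```

Per the abstract, the same generator could also be combined with a conventional teacher-based distillation term, or adapted to the self-distillation and collaborative-learning settings; those variants are omitted from this sketch.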
Pages: 34766 - 34778
Page count: 13
Related Papers
50 entries in total
  • [41] PKDGAN: Private Knowledge Distillation With Generative Adversarial Networks
    Zhuo, Cheng
    Gao, Di
    Liu, Liangwei
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06) : 775 - 788
  • [42] Knowledge Distillation for Image Signal Processing Using Only the Generator Portion of a GAN
    Heo, Youngjun
    Lee, Sunggu
    ELECTRONICS, 2022, 11 (22)
  • [43] Zero-Shot Text Normalization via Cross-Lingual Knowledge Distillation
    Wang, Linqin
    Huang, Xiang
    Yu, Zhengtao
    Peng, Hao
    Gao, Shengxiang
    Mao, Cunli
    Huang, Yuxin
    Dong, Ling
    Yu, Philip S.
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 4631 - 4646
  • [44] KDE-GAN: Enhancing Evolutionary GAN With Knowledge Distillation and Transfer Learning
    Liu, Zheping
    Song, Andy
    Sabar, Nasser
    PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2022, 2022, : 268 - 271
  • [45] Image Inpainting with GAN via Visual Knowledge Distillation from Large Text-Image Models
    Zhu, Zheng-An
    Chen, Liang-Yu
    Chiang, Chen-Kuo
    2024 11TH INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS-TAIWAN, ICCE-TAIWAN 2024, 2024, : 313 - 314
  • [46] Self-Knowledge Distillation via Progressive Associative Learning
    Zhao, Haoran
    Bi, Yanxian
    Tian, Shuwen
    Wang, Jian
    Zhang, Peiying
    Deng, Zhaopeng
    Liu, Kai
    ELECTRONICS, 2024, 13 (11)
  • [47] Knowledge Distillation-Based Semantic Communications for Multiple Users
    Liu, Chenguang
    Zhou, Yuxin
    Chen, Yunfei
    Yang, Shuang-Hua
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (07) : 7000 - 7012
  • [48] Model Compression Algorithm via Reinforcement Learning and Knowledge Distillation
    Liu, Botao
    Hu, Bing-Bing
    Zhao, Ming
    Peng, Sheng-Lung
    Chang, Jou-Ming
    Tsoulos, Ioannis G.
    MATHEMATICS, 2023, 11 (22)
  • [49] Unpaired Multi-Modal Segmentation via Knowledge Distillation
    Dou, Qi
    Liu, Quande
    Heng, Pheng Ann
    Glocker, Ben
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (07) : 2415 - 2425
  • [50] Parameter-Efficient and Student-Friendly Knowledge Distillation
    Rao, Jun
    Meng, Xv
    Ding, Liang
    Qi, Shuhan
    Liu, Xuebo
    Zhang, Min
    Tao, Dacheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 4230 - 4241