Conditional pseudo-supervised contrast for data-Free knowledge distillation

Times Cited: 4
Authors
Shao, Renrong [1 ]
Zhang, Wei [1 ]
Wang, Jun [1 ]
Affiliations
[1] East China Normal Univ, Sch Comp Sci & Technol, 3663 North Zhongshan Rd, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Model compression; Knowledge distillation; Representation learning; Contrastive learning; Privacy protection;
DOI
10.1016/j.patcog.2023.109781
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Data-free knowledge distillation (DFKD) is an effective way to address model compression and transmission restrictions while preserving privacy, and it has attracted extensive attention in recent years. Currently, most existing methods utilize a generator to synthesize images to support the distillation. Although these methods have achieved great success, many issues remain to be explored. First, the outstanding performance of supervised learning in deep learning motivates us to explore a pseudo-supervised paradigm for DFKD. Second, current synthesis methods cannot distinguish the distributions of different categories of samples, thus producing ambiguous samples that may be evaluated incorrectly by the teacher. Moreover, current methods cannot optimize category-wise sample diversity, which hinders the student model from learning from diverse samples and achieving better performance. In this paper, to address these limitations, we propose a novel learning paradigm: conditional pseudo-supervised contrast for data-free knowledge distillation (CPSC-DFKD). The primary innovations of CPSC-DFKD are: (1) introducing a conditional generative adversarial network to synthesize category-specific diverse images for pseudo-supervised learning, (2) improving the modules of the generator to distinguish the distributions of different categories, and (3) proposing pseudo-supervised contrastive learning based on teacher and student views to enhance diversity. Comprehensive experiments on three commonly used datasets validate the performance gains of both the student and the generator brought by CPSC-DFKD. The code is available at https://github.com/RoryShao/CPSC-DFKD.git

© 2023 Elsevier Ltd. All rights reserved.
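The abstract's third innovation is a pseudo-supervised contrastive loss over teacher and student views, where the class codes fed to the conditional generator act as pseudo-labels. Below is a minimal NumPy sketch of such a supervised-contrastive objective; the function name, the InfoNCE-style formulation, and the use of one cross-view similarity matrix are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pseudo_supervised_contrastive_loss(z_teacher, z_student, labels, tau=0.5):
    """InfoNCE-style pseudo-supervised contrastive loss (a sketch).

    z_teacher, z_student: (N, D) embeddings of the same synthesized batch
        viewed through the teacher and the student network, respectively.
    labels: (N,) pseudo-labels, i.e. the class codes conditioning the generator.
    For each teacher anchor, student embeddings with the same pseudo-label are
    positives; all other student embeddings in the batch are negatives.
    """
    # L2-normalize both views so the dot product is a cosine similarity.
    zt = z_teacher / np.linalg.norm(z_teacher, axis=1, keepdims=True)
    zs = z_student / np.linalg.norm(z_student, axis=1, keepdims=True)

    # Cross-view similarity matrix, temperature-scaled.
    logits = (zt @ zs.T) / tau
    # Stabilized log-softmax over each anchor's row.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Positive mask from the pseudo-labels (same class => positive pair).
    pos_mask = (labels[:, None] == labels[None, :]).astype(float)

    # Mean negative log-likelihood of the positives for each anchor.
    loss = -(pos_mask * log_prob).sum(axis=1) / pos_mask.sum(axis=1)
    return loss.mean()
```

Minimizing this loss pulls same-class teacher and student embeddings together while pushing different-class pairs apart, which is one way to encourage the category-wise diversity the abstract describes.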
Pages: 11