35 references in total
- [1] Knowledge Distillation with the Reused Teacher Classifier [C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 11923-11932
- [2] Du SC, 2020, Advances in Neural Information Processing Systems, V33, P12345
- [3] Efficient Knowledge Distillation from an Ensemble of Teachers [C]. 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017), 2017: 3697-3701
- [4] Knowledge Distillation: A Survey [J]. International Journal of Computer Vision, 2021, 129(6): 1789-1819
- [5] He KM, 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence, V42, P386, DOI 10.1109/TPAMI.2018.2844175
- [6] Deep Residual Learning for Image Recognition [C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778
- [7] Hinton G, 2015, arXiv preprint arXiv:1503.02531, DOI 10.48550/arXiv.1503.02531
- [8] A fast learning algorithm for deep belief nets [J]. Neural Computation, 2006, 18(7): 1527-1554
- [9] Hu J, 2018, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), P7132, DOI 10.1109/CVPR.2018.00745
- [10] Krizhevsky A, 2009, Learning Multiple Layers of Features from Tiny Images