- [33] Dual discriminator adversarial distillation for data-free model compression. International Journal of Machine Learning and Cybernetics, 2022, 13: 1213-1230.
- [35] AdaDS: Adaptive data selection for accelerating pre-trained language model knowledge distillation. AI Open, 2023, 4: 56-63.
- [36] A Task-Efficient Gradient Guide Knowledge Distillation for Pre-train Language Model Compression. Advanced Intelligent Computing Technology and Applications, Pt III (ICIC 2024), 2024, 14877: 366-377.
- [37] Semantic Segmentation Optimization Algorithm Based on Knowledge Distillation and Model Pruning. 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD 2019), 2019: 261-265.
- [38] Effective Compression of Language Models by Combining Pruning and Knowledge Distillation. 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC 2024), 2024: 429-438.