Total: 50 entries
- [1] FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning. INTERSPEECH 2022, 2022: 3588-3592.
- [4] On-Device Constrained Self-Supervised Speech Representation Learning for Keyword Spotting via Knowledge Distillation. INTERSPEECH 2023, 2023: 1623-1627.
- [5] DPHuBERT: Joint Distillation and Pruning of Self-Supervised Speech Models. INTERSPEECH 2023, 2023: 62-66.
- [8] Hierarchical Self-Supervised Learning for Knowledge-Aware Recommendation. Applied Sciences-Basel, 2024, 14 (20).
- [9] Similarity Analysis of Self-Supervised Speech Representations. 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), 2021: 3040-3044.
- [10] Self-Supervised Contrastive Learning for Camera-to-Radar Knowledge Distillation. 2024 20th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT 2024), 2024: 154-161.