Total: 50 records
- [41] Boosting the Performance of Lightweight HAR Models with Attention and Knowledge Distillation, 2024 INTERNATIONAL CONFERENCE ON INTELLIGENT ENVIRONMENTS (IE 2024), 2024: 1-8
- [42] Improving Multilingual Text-to-Speech with Mixture-of-Language-Experts and Accent Disentanglement, INTERSPEECH 2024, 2024: 4968-4972
- [46] Language-Routing Mixture of Experts for Multilingual and Code-Switching Speech Recognition, INTERSPEECH 2023, 2023: 1389-1393
- [47] Automatic Segmentation using Knowledge Distillation with Ensemble Models (ASKDEM), UNMANNED SYSTEMS TECHNOLOGY XXVI, 2024, Vol. 13055
- [48] Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation, INTERSPEECH 2023, 2023: 1364-1368