50 entries in total
- [32] Distilling Facial Knowledge with Teacher-Tasks: Semantic-Segmentation-Features for Pose-Invariant Face-Recognition [J]. 2022 IEEE International Conference on Image Processing (ICIP), 2022: 741-745
- [33] Distill-VQ: Learning Retrieval Oriented Vector Quantization by Distilling Knowledge from Dense Embeddings [J]. Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '22), 2022: 1513-1523
- [35] Collaborative Multiple-Student Single-Teacher for Online Learning [J]. Artificial Neural Networks and Machine Learning - ICANN 2022, Pt I, 2022, 13529: 515-525
- [38] Distilling Structured Knowledge into Embeddings for Explainable and Accurate Recommendation [J]. Proceedings of the 13th International Conference on Web Search and Data Mining (WSDM '20), 2020: 735-743
- [39] Distilling the Knowledge of BERT for Sequence-to-Sequence ASR [J]. Interspeech 2020, 2020: 3635-3639
- [40] Distilling the Knowledge of Romanian BERTs Using Multiple Teachers [J]. Thirteenth International Conference on Language Resources and Evaluation (LREC 2022), 2022: 374-384