50 entries in total
- [1] DIALOGLM: Pre-trained Model for Long Dialogue Understanding and Summarization THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 11765 - 11773
- [3] Investigating Pre-trained Audio Encoders in the Low-Resource Condition INTERSPEECH 2023, 2023, : 1498 - 1502
- [4] On the Transferability of Pre-trained Language Models for Low-Resource Programming Languages 30TH IEEE/ACM INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION (ICPC 2022), 2022, : 401 - 412
- [5] Embedding Articulatory Constraints for Low-resource Speech Recognition Based on Large Pre-trained Model INTERSPEECH 2023, 2023, : 1394 - 1398
- [6] Named-Entity Recognition for a Low-resource Language using Pre-Trained Language Model 37TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, 2022, : 837 - 844
- [7] A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition PROCEEDINGS OF THE 7TH WORKSHOP ON REPRESENTATION LEARNING FOR NLP, 2022, : 46 - 59
- [10] Efficient Fine-Tuning for Low-Resource Tibetan Pre-trained Language Models ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VII, 2024, 15022 : 410 - 422