50 entries in total
- [41] Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompts. Findings of the Association for Computational Linguistics (EMNLP 2023), 2023: 12081-12095.
- [42] Disfluencies and Fine-Tuning Pre-trained Language Models for Detection of Alzheimer's Disease. INTERSPEECH 2020, 2020: 2162-2166.
- [43] Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning. 61st Conference of the Association for Computational Linguistics (ACL 2023), Vol. 2, 2023: 1239-1248.
- [44] Towards Efficient Fine-Tuning of Pre-trained Code Models: An Experimental Study and Beyond. Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2023), 2023: 39-51.
- [45] SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization. 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), 2020: 2177-2190.
- [48] Fine-tuning Pre-trained Models for Robustness under Noisy Labels. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI 2024), 2024: 3643-3651.
- [49] Exploiting Syntactic Information to Boost the Fine-tuning of Pre-trained Models. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC 2022), 2022: 575-582.