50 records in total
- [22] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning. In Proceedings of the Third Workshop on Insights from Negative Results in NLP (Insights 2022), 2022, pp. 144-153.
- [24] Revisiting k-NN for Fine-Tuning Pre-trained Language Models. In Chinese Computational Linguistics (CCL 2023), 2023, vol. 14232, pp. 327-338.
- [26] Improving Pre-Trained Weights through Meta-Heuristics Fine-Tuning. In 2021 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2021), 2021.
- [27] Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization. In International Conference on Machine Learning, 2021, vol. 139.
- [28] Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
- [29] An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), vol. 1, 2021, pp. 2286-2300.
- [30] Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Long Papers, vol. 1, 2023, pp. 12813-12832.