- [2] Pruning Pre-trained Language Models Without Fine-Tuning. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 1, 2023: 594-605.
- [3] Span Fine-tuning for Pre-trained Language Models. Findings of the Association for Computational Linguistics: EMNLP 2021, 2021: 1970-1979.
- [4] Waste Classification by Fine-Tuning Pre-trained CNN and GAN. International Journal of Computer Science and Network Security, 2021, 21(8): 65-70.
- [5] Fine-Tuning Pre-Trained Language Models with Gaze Supervision. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol. 2: Short Papers, 2024: 217-224.
- [6] Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples. 45th IEEE Symposium on Security and Privacy (SP 2024), 2024: 3015-3033.
- [7] Variational Monte Carlo on a Budget: Fine-tuning Pre-trained Neural Wavefunctions. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [9] Fine-tuning Pre-trained Models for Robustness under Noisy Labels. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI 2024), 2024: 3643-3651.
- [10] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning. Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion (LTEDI 2022), 2022: 59-69.