共 48 条
- [1] CONVFIT: Conversational Fine-Tuning of Pretrained Language Models 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1151 - 1168
- [3] On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
- [4] Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 7870 - 7881
- [5] Prompt Engineering or Fine-Tuning? A Case Study on Phishing Detection with Large Language Models MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2024, 6 (01): : 367 - 384
- [6] Debiased Fine-Tuning for Vision-Language Models by Prompt Regularization THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3, 2023, : 3834 - 3842
- [7] Fine-Tuning Pretrained Language Models to Enhance Dialogue Summarization in Customer Service Centers PROCEEDINGS OF THE 4TH ACM INTERNATIONAL CONFERENCE ON AI IN FINANCE, ICAIF 2023, 2023, : 365 - 373
- [8] Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 12528 - 12540
- [9] Causal-Debias: Unifying Debiasing in Pretrained Language Models and Fine-tuning via Causal Invariant Learning PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 4227 - 4241
- [10] How fine can fine-tuning be? Learning efficient language models INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2435 - 2442