50 entries in total
- [1] FedPETuning: When Federated Learning Meets the Parameter-Efficient Tuning Methods of Pre-trained Language Models. Findings of the Association for Computational Linguistics (ACL 2023), 2023: 9963-9977.
- [2] Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence, 2023, 5: 220-235.
- [4] Reducing Communication Overhead in Federated Learning for Pre-trained Language Models Using Parameter-Efficient Finetuning. Conference on Lifelong Learning Agents, 2023, 232: 456-469.
- [5] Parameter-Efficient Fine-Tuning of Pre-trained Large Language Models for Financial Text Analysis. Artificial Intelligence Research (SACAIR 2024), 2025, 2326: 3-20.
- [6] Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models. Findings of the Association for Computational Linguistics (ACL 2023), 2023: 8506-8515.
- [7] ADT: An Additive Delta-Tuning Approach for Parameter-Efficient Tuning in Pre-trained Language Models. 2024 6th International Conference on Natural Language Processing (ICNLP 2024), 2024: 382-386.
- [8] Hadamard Adapter: An Extreme Parameter-Efficient Adapter Tuning Method for Pre-trained Language Models. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM 2023), 2023: 276-285.
- [9] Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023: 7871-7888.