FLFT: A Large-Scale Pre-Training Model Distributed Fine-Tuning Method That Integrates Federated Learning Strategies

Times Cited: 0
Authors
Tao, Yu [1 ]
Yang, Ruopeng [1 ]
Zeng, Kaisheng [2 ]
Yin, Changsheng [1 ]
Lu, Yiwei [1 ]
Lu, Wenxin [1 ]
Shi, Yongqi [1 ]
Wang, Bo [1 ]
Huang, Bo [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Informat & Commun, Wuhan 430000, Peoples R China
[2] Tsinghua Univ, Dept Comp Sci, Beijing 100000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Data models; Adaptation models; Federated learning; Computational modeling; Servers; Large language models; Artificial intelligence; Transformers; Analytical models; Large-scale pre-trained models; artificial intelligence; federated learning; machine learning; natural language processing;
DOI
10.1109/ACCESS.2025.3549819
CLC Number
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
In recent years, large-scale pre-trained models have advanced rapidly in general domains, but few practically viable technical solutions exist for specialized fields, particularly sensitive areas such as healthcare, finance, and the military, where decision support demands precise, professional, and credible knowledge. Centralizing data on cloud servers and then efficiently fine-tuning large-scale pre-trained models is impractical for industries with strict data privacy requirements. This paper proposes a distributed training strategy for large-scale pre-trained models that integrates federated learning. First, each edge device fine-tunes the model offline on its local data. Then, the parameters of the small, efficiently fine-tuned modules (e.g., LoRA adapters) are uploaded to the server for aggregation. Finally, the aggregated model provides intelligent services and solutions for complex scenarios in specialized fields.
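The aggregation step described in the abstract can be sketched as a FedAvg-style weighted average over the uploaded LoRA parameters. This is a minimal illustration under the assumption of data-size-weighted averaging; the function name and parameter layout are hypothetical, not taken from the paper.

```python
def fedavg_lora(client_updates, client_sizes):
    """Aggregate per-client LoRA parameter dicts by weighted average.

    client_updates: list of dicts mapping parameter name -> list of floats
                    (the small adapter weights each edge device uploads)
    client_sizes:   list of local dataset sizes, used as FedAvg weights
    """
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    aggregated = {}
    for name in client_updates[0]:
        params = [u[name] for u in client_updates]
        aggregated[name] = [
            sum(w * p[i] for w, p in zip(weights, params))
            for i in range(len(params[0]))
        ]
    return aggregated

# Two clients with unequal data sizes: the larger client dominates.
updates = [{"lora_A": [1.0, 2.0]}, {"lora_A": [3.0, 4.0]}]
sizes = [1, 3]
result = fedavg_lora(updates, sizes)  # {"lora_A": [2.5, 3.5]}
```

Because only the small adapter parameters travel to the server, the raw training data never leaves the edge devices, which is what makes the scheme viable for privacy-sensitive domains.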
Pages: 56439-56453
Page count: 15