A Privacy-Preserving Local Differential Privacy-Based Federated Learning Model to Secure LLM from Adversarial Attacks

Cited by: 3
Authors
Salim, Mikail Mohammed [1 ]
Deng, Xianjun [2 ]
Park, Jong Hyuk [1 ]
Affiliations
[1] Seoul National University of Science and Technology (SeoulTech), Department of Computer Science and Engineering, Seoul, South Korea
[2] Huazhong University of Science and Technology, Department of Cyber Science and Engineering, Wuhan, China
Funding
National Research Foundation of Singapore
Keywords
Federated Learning; Local Differential Privacy; Blockchain; Secret Sharing; Internet
DOI
10.22967/HCIS.2024.14.057
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Chatbot applications using large language models (LLMs) offer human-like responses to user queries, but their widespread use raises significant concerns about data privacy and integrity. Adversarial attacks can extract confidential data during model training and submit poisoned data, compromising chatbot reliability. Additionally, the transmission of unencrypted user data for local model training poses new privacy challenges. This paper addresses these issues by proposing a blockchain and federated learning-enabled LLM model to ensure user data privacy and integrity. A local differential privacy method adds noise to anonymize user data during the data collection phase for local training at the edge layer. Federated learning prevents the sharing of private local training data with the cloud-based global model. Secure multi-party computation using secret sharing and blockchain ensures secure and reliable model aggregation, preventing adversarial model poisoning. Evaluation results show a 46% higher accuracy in global model training compared to models trained with poisoned data. The study demonstrates that the proposed local differential privacy method effectively prevents adversarial attacks and protects federated learning models from poisoning during training, enhancing the security and reliability of chatbot applications.
Pages: 25
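The abstract states that a local differential privacy method adds noise to anonymize user data at the collection phase, before edge-layer local training. As a minimal illustration of that idea only (not the paper's implementation; the Laplace mechanism, sensitivity, and epsilon values below are assumptions), the following Python sketch perturbs a client's numeric feature vector before it is used for local training.

```python
import numpy as np

def ldp_perturb(features: np.ndarray, epsilon: float = 1.0,
                sensitivity: float = 1.0) -> np.ndarray:
    """Anonymize a user's feature vector with Laplace noise.

    Noise scale b = sensitivity / epsilon; the raw values never leave
    the client unperturbed, so the edge server only sees noisy data.
    """
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=features.shape)
    return features + noise

# Example: a client perturbs its data before edge-layer local training.
raw = np.array([0.42, 0.17, 0.93])
private = ldp_perturb(raw, epsilon=0.5)
print(private)
```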
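The abstract also describes secure multi-party computation with secret sharing so that model aggregation never exposes an individual client's update. Below is a minimal sketch of additive secret sharing over model-update vectors; the number of shares, the helper names, and the two-aggregator setup are illustrative assumptions, and the blockchain verification step described in the paper is not modeled here.

```python
import numpy as np

def make_shares(update: np.ndarray, n_shares: int = 2) -> list[np.ndarray]:
    """Split a client's model update into additive shares.

    The first n-1 shares are random masks; the last share is chosen so
    all shares sum back to the original update. Any subset smaller than
    n reveals nothing about the update itself.
    """
    masks = [np.random.normal(size=update.shape) for _ in range(n_shares - 1)]
    last = update - sum(masks)
    return masks + [last]

def aggregate(shares_per_client: list[list[np.ndarray]]) -> np.ndarray:
    """Each aggregator sums the shares it holds; adding the partial sums
    recovers only the sum of client updates, never an individual one."""
    n_shares = len(shares_per_client[0])
    partial_sums = [sum(client[i] for client in shares_per_client)
                    for i in range(n_shares)]
    return sum(partial_sums)

# Example: three clients, their updates aggregated without exposure.
updates = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([-0.1, 0.4])]
shares = [make_shares(u) for u in updates]
print(aggregate(shares))  # equals the element-wise sum of all updates
print(sum(updates))       # reference value for comparison
```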