FedGhost: Data-Free Model Poisoning Enhancement in Federated Learning

Cited by: 0
Authors
Ma, Zhuoran [1 ]
Huang, Xinyi [1 ]
Wang, Zhuzhu [2 ]
Qin, Zhan [3 ,4 ]
Wang, Xiangyu [1 ]
Ma, Jianfeng [1 ]
Affiliations
[1] Xidian Univ, Sch Cyber Engn, Shaanxi Key Lab Network & Syst Secur, Xian 710071, Peoples R China
[2] Northwest Univ, Sch Informat Sci & Technol, Xian 710127, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, Sch Cyber Sci & Technol, Hangzhou 310027, Peoples R China
[4] ZJU Hangzhou Global Sci & Technol Innovat Ctr, Hangzhou 311200, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Training; Adaptation models; Servers; Perturbation methods; Data models; Security; Forensics; Electronic mail; Data privacy; Computational modeling; Federated learning; model poisoning attack; non-IID; adaptive attack;
DOI
10.1109/TIFS.2025.3539087
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Federated learning (FL) is vulnerable to model poisoning attacks due to the invisibility of local data and the decentralized nature of FL training. The adversary attempts to maliciously manipulate local model gradients to compromise the global model (i.e., the victim model). Commonly studied model poisoning attacks depend heavily on access to additional knowledge, such as local data and the victim model's aggregation algorithm, and therefore easily encounter practical obstacles when adversarial knowledge is limited. In this paper, we first reveal that aggregated gradients in FL can serve as an attack carrier, exposing the latent knowledge of the victim model. In particular, we propose a data-free model poisoning attack named FedGhost, which aims to redirect the training objective of FL towards the adversary's objective without any auxiliary information. In FedGhost, we design a black-box adaptive optimization algorithm to dynamically adjust the perturbation factor for malicious gradients, maximizing the poisoning impact on FL. Experimental results on five datasets in IID and non-IID FL settings demonstrate that FedGhost achieves the highest attack success rate, outperforming other state-of-the-art model poisoning attacks by 10%-60%.
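The abstract only sketches the mechanism: a malicious client holding no local data fabricates gradients, scales them by a perturbation factor, and tunes that factor round by round using only the aggregated global model returned by the server. The toy Python sketch below illustrates that general idea under strong assumptions; the FedAvg-style server, the opposing-direction malicious update, and the drift-based hill-climbing rule are illustrative stand-ins, not FedGhost's actual algorithm.

import numpy as np

# Toy simulation: 9 benign clients plus 1 data-free attacker under plain
# FedAvg-style averaging. All heuristics here are illustrative assumptions.
rng = np.random.default_rng(0)
dim = 10                                  # toy model size
global_model = rng.normal(size=dim)       # current global parameters
num_benign = 9
perturbation_factor = 1.0                 # adversary's tunable scale
prev_drift = None

for round_idx in range(5):
    # Benign clients nudge the model toward a (toy) optimum at the origin.
    benign_updates = [0.1 * (-global_model) + rng.normal(scale=0.01, size=dim)
                      for _ in range(num_benign)]

    # Data-free attacker: fabricate an update opposing the benign direction,
    # scaled by the current perturbation factor (no local data needed).
    malicious_update = perturbation_factor * 0.1 * global_model

    # Server aggregates all submitted updates by simple averaging.
    aggregated = np.mean(benign_updates + [malicious_update], axis=0)
    new_global = global_model + aggregated

    # Black-box adaptation: the attacker observes only the new global model and
    # uses its drift as a crude proxy for poisoning impact, hill-climbing the
    # factor (a stand-in for the paper's adaptive optimization).
    drift = float(np.linalg.norm(new_global - global_model))
    if prev_drift is not None:
        perturbation_factor *= 1.2 if drift > prev_drift else 0.8
    prev_drift = drift
    global_model = new_global
    print(f"round {round_idx}: drift={drift:.4f}, factor={perturbation_factor:.2f}")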
Pages: 2096-2108
Number of pages: 13
References
46 in total
[1]  
Konecny J., McMahan H.B., Yu F.X., Richtarik P., Suresh A.T., Bacon D., Federated learning: Strategies for improving communication efficiency, (2016)
[2]  
Yang Q., Liu Y., Chen T., Tong Y., Federated machine learning: Concept and applications, ACM Trans. Intell. Syst. Technol., 10, 2, pp. 1-19, (2019)
[3]  
Xu C., Qu Y., Luan T.H., Eklund P.W., Xiang Y., Gao L., An efficient and reliable asynchronous federated learning scheme for smart public transportation, IEEE Trans. Veh. Technol., 72, 5, pp. 6584-6598, (2023)
[4]  
Tan Y.N., Tinh V.P., Lam P.D., Nam N.H., Khoa T.A., A transfer learning approach to breast cancer classification in a federated learning framework, IEEE Access, 11, pp. 27462-27476, (2023)
[5]  
McMahan H.B., Moore E., Ramage D., Arcas B.A.Y., Federated learning of deep networks using model averaging, (2016)
[6]  
Sun Y., Ochiai H., Sakuma J., Attacking-distance-aware attack: Semi-targeted model poisoning on federated learning, IEEE Trans. Artif. Intell., 5, 2, pp. 925-939, (2024)
[7]  
Wei K., Li J., Ding M., Ma C., Jeon Y.-S., Poor H.V., Covert model poisoning against federated learning: Algorithm design and optimization, IEEE Trans. Dependable Secure Comput., 21, 3, pp. 1196-1209, (2024)
[8]  
Cao X., Gong N.Z., MPAF: Model poisoning attacks to federated learning based on fake clients, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 3396-3404, (2022)
[9]  
Wang Z., Ma J., Wang X., Hu J., Qin Z., Ren K., Threats to training: A survey of poisoning attacks and defenses on machine learning systems, ACM Comput. Surv., 55, 7, (2023)
[10]  
Li X., Qu Z., Zhao S., Tang B., Lu Z., Liu Y., LoMar: A local defense against poisoning attack on federated learning, IEEE Trans. Dependable Secure Comput., 20, 1, pp. 437-450, (2023)