FedGhost: Data-Free Model Poisoning Enhancement in Federated Learning

Cited by: 0
Authors
Ma, Zhuoran [1 ]
Huang, Xinyi [1 ]
Wang, Zhuzhu [2 ]
Qin, Zhan [3 ,4 ]
Wang, Xiangyu [1 ]
Ma, Jianfeng [1 ]
Affiliations
[1] Xidian Univ, Sch Cyber Engn, Shaanxi Key Lab Network & Syst Secur, Xian 710071, Peoples R China
[2] Northwest Univ, Sch Informat Sci & Technol, Xian 710127, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, Sch Cyber Sci & Technol, Hangzhou 310027, Peoples R China
[4] ZJU Hangzhou Global Sci & Technol Innovat Ctr, Hangzhou 311200, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Training; Adaptation models; Servers; Perturbation methods; Data models; Security; Forensics; Electronic mail; Data privacy; Computational modeling; Federated learning; model poisoning attack; non-IID; adaptive attack;
DOI
10.1109/TIFS.2025.3539087
CLC number
TP301 [Theory and Methods];
Subject classification code
081202;
Abstract
Federated learning (FL) is vulnerable to model poisoning attacks due to the invisibility of local data and the decentralized nature of FL training. The adversary attempts to maliciously manipulate local model gradients to compromise the global model (i.e., the victim model). Commonly studied model poisoning attacks depend heavily on access to additional knowledge, such as local data and the victim model's aggregation algorithm, and therefore face practical obstacles when adversarial knowledge is limited. In this paper, we first reveal that aggregated gradients in FL can serve as an attack carrier, exposing latent knowledge of the victim model. In particular, we propose a data-free model poisoning attack named FedGhost, which aims to redirect the training objective of FL toward the adversary's objective without any auxiliary information. In FedGhost, we design a black-box adaptive optimization algorithm that dynamically adjusts the perturbation factor for malicious gradients, maximizing the poisoning impact on FL. Experimental results on five datasets in IID and non-IID FL settings demonstrate that FedGhost achieves the highest attack success rate, outperforming other state-of-the-art model poisoning attacks by $10\%$ to $60\%$.
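The paper's algorithm is not reproduced in this record, so the following is a minimal toy sketch, in Python, of the black-box adaptive loop the abstract describes: a malicious client that observes only the broadcast global model, reads the aggregate update off consecutive rounds, and multiplicatively adapts a perturbation factor along a target direction. Every concrete choice here (the names gamma/target/observed, the 1.5x/0.5x update rule, the benign-client model, plain unweighted FedAvg) is an illustrative assumption, not FedGhost itself.

import numpy as np

rng = np.random.default_rng(0)
dim, n_benign = 8, 9
global_model = np.zeros(dim)

# Adversary's objective, reduced to a fixed target direction in weight space (toy).
target = rng.standard_normal(dim)
target /= np.linalg.norm(target)

gamma = 1.0                          # perturbation factor, adapted each round
prev_global = global_model.copy()

for rnd in range(20):
    # Data-free, black-box feedback: the attacker's only observation is the
    # broadcast global model, so it reads the aggregate update off consecutive rounds.
    observed = global_model - prev_global
    if rnd > 0:
        # Hypothetical multiplicative rule: grow gamma while the global update
        # still drifts toward the target direction, shrink it once it resists.
        gamma *= 1.5 if observed @ target > 0 else 0.5
    prev_global = global_model.copy()

    # Benign clients: noisy gradient steps toward the benign optimum (origin here).
    benign = [-0.1 * global_model + 0.01 * rng.standard_normal(dim)
              for _ in range(n_benign)]

    # Malicious update: mimic the estimated aggregate update, plus a perturbation
    # of magnitude gamma along the adversary's direction.
    malicious = observed + gamma * target

    # Plain (unweighted) FedAvg over all ten submitted updates.
    global_model = global_model + np.mean(benign + [malicious], axis=0)

cos = global_model @ target / (np.linalg.norm(global_model) + 1e-12)
print(f"cosine(global model, target direction) after 20 rounds: {cos:.3f}")

The multiplicative rule merely stands in for whatever adaptive optimizer the paper actually designs; the point of the sketch is that feedback derived from the aggregated model alone can steer the perturbation magnitude.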
Pages: 2096-2108
Page count: 13