Data-Agnostic Model Poisoning Against Federated Learning: A Graph Autoencoder Approach

Cited: 8
Authors
Li, Kai [1 ,2 ]
Zheng, Jingjing [3 ,4 ]
Yuan, Xin [5 ]
Ni, Wei [5 ]
Akan, Ozgur B. [6 ,7 ]
Poor, H. Vincent [8 ]
Affiliations
[1] Univ Cambridge, Dept Engn, Cambridge CB3 0FA, England
[2] Real Time & Embedded Comp Syst Res Ctr CISTER, P-4249015 Porto, Portugal
[3] Carnegie Mellon Univ, CyLab Secur & Privacy Inst, Pittsburgh, PA 15213 USA
[4] Real Time & Embedded Comp Syst Res Ctr CISTER, P-4249015 Porto, Portugal
[5] Commonwealth Sci & Ind Res Org CSIRO, Digital Prod & Serv Flagship, Sydney, NSW 2122, Australia
[6] Univ Cambridge, Dept Engn, Div Elect Engn, Cambridge CB3 0FA, England
[7] Koc Univ, Ctr neXt generat Commun CXC, TR-34450 Istanbul, Turkiye
[8] Princeton Univ, Dept Elect & Comp Engn, Princeton, NJ 08544 USA
Keywords
Federated learning; model poisoning attack; graph autoencoder; feature correlation; attacks; IoT
DOI
10.1109/TIFS.2024.3362147
Chinese Library Classification (CLC)
TP301 [Theory, Methods]
Subject Classification
081202
Abstract
This paper proposes a novel, data-agnostic model poisoning attack on Federated Learning (FL) by designing a new adversarial graph autoencoder (GAE)-based framework. The attack requires no knowledge of the FL training data and achieves both effectiveness and undetectability. By eavesdropping on the benign local models and the global model, the attacker extracts the graph-structural correlations among the benign local models and the training-data features underlying those models. The attacker then adversarially regenerates the graph-structural correlations while maximizing the FL training loss, and subsequently generates malicious local models from the adversarial graph structure and the training-data features of the benign models. A new algorithm is designed to iteratively train the malicious local models using the GAE and sub-gradient descent. The convergence of FL under attack is rigorously proven, revealing a considerably large optimality gap. Experiments show that FL accuracy degrades gradually under the proposed attack and that existing defense mechanisms fail to detect it. The attack can spread an infection across all benign devices, making it a serious threat to FL.
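The abstract outlines a three-step pipeline: extract the correlation structure among eavesdropped benign local models, adversarially regenerate that structure with a GAE while pushing the FL training loss upward, and emit a malicious local model that blends the benign models' features. Below is a minimal, illustrative sketch of that loop, not the authors' implementation: the cosine-similarity graph, the one-layer GCN encoder with inner-product decoder, the hidden size, the eps weight, and the use of distance from the global model as a data-agnostic proxy for the FL training loss are all assumptions introduced here.

```python
# Illustrative sketch only; names, hyperparameters, and the loss proxy
# are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def correlation_graph(W):
    # Adjacency from pairwise cosine similarity of flattened benign models W: (n, d).
    A = F.cosine_similarity(W.unsqueeze(1), W.unsqueeze(0), dim=-1)
    return A.clamp_min(0.0)  # keep non-negative edge weights

class GAE(torch.nn.Module):
    # One-layer GCN encoder with an inner-product decoder.
    def __init__(self, d_in, d_hid=32):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_hid)

    def forward(self, A, X):
        A_hat = A + torch.eye(A.size(0))           # add self-loops
        d = A_hat.sum(1).pow(-0.5)
        A_norm = d[:, None] * A_hat * d[None, :]   # symmetric normalization
        Z = torch.relu(A_norm @ self.lin(X))       # node embeddings
        return torch.sigmoid(Z @ Z.T)              # regenerated adjacency

def craft_malicious_model(W_benign, w_global, steps=50, eps=0.5):
    # Train the GAE to regenerate the benign correlation structure (stealth)
    # while steering the crafted model away from the global model (attack proxy).
    A = correlation_graph(W_benign)
    gae = GAE(W_benign.size(1))
    opt = torch.optim.Adam(gae.parameters(), lr=1e-2)
    for _ in range(steps):
        A_rec = gae(A, W_benign)
        w_mal = (A_rec[0] @ W_benign) / A_rec[0].sum()  # structure-weighted blend
        recon = F.mse_loss(A_rec, A)                    # undetectability term
        attack = -F.mse_loss(w_mal, w_global)           # negated: maximize deviation
        loss = recon + eps * attack
        opt.zero_grad(); loss.backward(); opt.step()
    return w_mal.detach()

# Toy usage: 8 benign local models with 100 parameters each.
W = torch.randn(8, 100)
w_mal = craft_malicious_model(W, w_global=W.mean(dim=0))
```

The reconstruction term keeps the regenerated structure close to the benign one, which is what the abstract credits for evading detection, while eps trades stealth against attack strength: a larger eps degrades the global model faster but makes the malicious model easier to flag.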
Pages: 3465-3480
Page count: 16