Leverage Variational Graph Representation for Model Poisoning on Federated Learning

Cited by: 3
Authors
Li, Kai [1 ,2 ]
Yuan, Xin [3 ]
Zheng, Jingjing [2 ,4 ]
Ni, Wei [3 ]
Dressler, Falko [5 ]
Jamalipour, Abbas [6 ]
Affiliations
[1] Univ Cambridge, Dept Engn, Cambridge CB3 0FA, England
[2] Real Time & Embedded Comp Syst Res Ctr CISTER, P-4249015 Porto, Portugal
[3] Commonwealth Sci & Ind Res Org CSIRO, Digital Prod & Serv Flagship, Marsfield, NSW 2122, Australia
[4] Carnegie Mellon Univ, CyLab Secur & Privacy Inst, Pittsburgh, PA 15213 USA
[5] TU Berlin, Sch Elect Engn & Comp Sci, D-10623 Berlin, Germany
[6] Univ Sydney, Sch Elect & Informat Engn, Sydney, NSW 2006, Australia
Keywords
Data models; Training; Servers; Computational modeling; Correlation; Training data; Real-time systems; Data-untethered model poisoning (MP); federated learning (FL); variational graph autoencoders; ATTACK;
DOI
10.1109/TNNLS.2024.3394252
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
This article puts forth a new training data-untethered model poisoning (MP) attack on federated learning (FL). The attack extends an adversarial variational graph autoencoder (VGAE) to craft malicious local models based solely on overheard benign local models, without any access to the FL training data. The resulting VGAE-MP attack is not only effective but also difficult to detect. VGAE-MP extracts the graph-structural correlations among the benign local models and their training data features, adversarially regenerates the graph structure, and then generates malicious local models from the adversarial graph structure and the benign models' features. Moreover, a new attacking algorithm is presented that trains the malicious local models using the VGAE and sub-gradient descent, while enabling an optimal selection of the benign local models for training the VGAE. Experiments demonstrate a gradual drop in FL accuracy under the proposed VGAE-MP attack and the ineffectiveness of existing defense mechanisms in detecting it, posing a severe threat to FL.
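The pipeline the abstract describes (build a graph over the overheard benign local models, fit a VGAE to it, adversarially regenerate the structure, and read off a malicious update) can be sketched in PyTorch. The sketch below is a minimal illustration under stated assumptions, not the authors' released implementation: the cosine-similarity graph construction, network sizes, loss weighting, and the final `craft_malicious_update` objective are all hypothetical stand-ins for the paper's actual design.

```python
# Minimal sketch of a VGAE-based, data-untethered poisoning step (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

def similarity_graph(updates: torch.Tensor, thresh: float = 0.5) -> torch.Tensor:
    """Adjacency from pairwise cosine similarity of overheard benign updates
    (one row per client). Thresholding is an illustrative assumption."""
    sim = F.cosine_similarity(updates.unsqueeze(1), updates.unsqueeze(0), dim=-1)
    adj = (sim > thresh).float()
    adj.fill_diagonal_(0.0)
    return adj

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric GCN normalization D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class VGAE(nn.Module):
    """Two-layer GCN encoder with a Gaussian latent, inner-product decoder."""
    def __init__(self, in_dim: int, hid_dim: int = 32, lat_dim: int = 16):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w_mu = nn.Linear(hid_dim, lat_dim, bias=False)
        self.w_logvar = nn.Linear(hid_dim, lat_dim, bias=False)

    def encode(self, x, a_norm):
        h = F.relu(a_norm @ self.w0(x))
        return a_norm @ self.w_mu(h), a_norm @ self.w_logvar(h)

    def decode(self, z):
        return torch.sigmoid(z @ z.t())  # reconstructed adjacency

    def forward(self, x, a_norm):
        mu, logvar = self.encode(x, a_norm)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.decode(z), mu, logvar

def craft_malicious_update(benign_updates: torch.Tensor, steps: int = 200):
    """Fit the VGAE to the benign-update graph, then derive a poisoned update
    from the regenerated structure. The final objective here is hypothetical."""
    adj = similarity_graph(benign_updates)
    a_norm = normalize_adj(adj)
    model = VGAE(benign_updates.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        a_rec, mu, logvar = model(benign_updates, a_norm)
        rec = F.binary_cross_entropy(a_rec, adj)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        (rec + kl).backward()
        opt.step()
        opt.zero_grad()
    # Adversarial regeneration (assumed heuristic): weight benign features by the
    # least plausible connectivity, so the crafted update stays close enough to
    # the benign population to evade detection while pulling the aggregate away
    # from the benign consensus.
    with torch.no_grad():
        a_rec, _, _ = model(benign_updates, a_norm)
        w = F.softmax(-a_rec.sum(1), dim=0)  # favor weakly connected nodes
        return (w.unsqueeze(1) * benign_updates).sum(0)
```

Here `benign_updates` would be an N x D tensor of flattened local-model parameter vectors overheard by the attacker, and the returned vector is what the attacker would submit to the server as its "local model." The paper's algorithm additionally optimizes which benign local models are selected for training the VGAE; that selection step is omitted from this sketch.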
Pages: 116-128
Page count: 13