PoisonGAN: Generative Poisoning Attacks Against Federated Learning in Edge Computing Systems

Cited by: 192
Authors
Zhang, Jiale [1 ,2 ]
Chen, Bing [1 ,2 ]
Cheng, Xiang [1 ,2 ]
Huynh Thi Thanh Binh [3 ]
Yu, Shui [4 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Nanjing Univ Aeronaut & Astronaut, Sci & Technol Avion Integrat Lab, Nanjing 211106, Peoples R China
[3] Hanoi Univ Sci & Technol, Sch Informat & Commun Technol, Hanoi 100000, Vietnam
[4] Univ Technol Sydney, Sch Comp Sci, Sydney, NSW 2007, Australia
Funding
National Natural Science Foundation of China;
Keywords
Data models; Training; Training data; Computational modeling; Machine learning; Servers; Backdoor attack; federated learning; generative adversarial nets; label flipping; poisoning attacks
DOI
10.1109/JIOT.2020.3023126
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Edge computing is a key enabling technology that meets the continuously increasing requirements of intelligent Internet-of-Things (IoT) applications. To cope with the growing privacy leakage risks of machine learning while still benefiting from unbalanced data distributions, federated learning has been widely adopted as a novel intelligent edge computing framework with a localized training mechanism. However, recent studies have found that the federated learning framework is inherently vulnerable to active attacks, and poisoning attacks are among the most powerful and stealthy of these: the functionality of the global model can be damaged by an attacker's well-crafted local updates. In this article, we give a comprehensive exploration of poisoning attack mechanisms in the context of federated learning. We first present a poison data generation method, named Data_Gen, based on generative adversarial networks (GANs). This method relies on the iteratively updated global model parameters to regenerate samples of the victim class of interest. Second, we propose a novel generative poisoning attack model, named PoisonGAN, against the federated learning framework. This model uses the designed Data_Gen method to substantially relax the attack assumptions and make the attacks feasible in practice. We finally evaluate our data generation and attack models by implementing two typical poisoning attack strategies, label flipping and backdoor, on a federated learning prototype. The experimental results demonstrate that these two attack models are effective against federated learning.
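The abstract only sketches the Data_Gen idea at a high level. The following is a minimal, hypothetical Python/PyTorch illustration of GAN-based poison-data generation combined with label flipping, in the spirit of what the abstract describes: the frozen global model plays the discriminator role and steers a local generator toward samples of the victim class, after which the attacker assigns those samples a wrong target label. Every name here (Generator, generate_poison), the network architecture, the flattened 28x28 input shape, and all hyperparameters are illustrative assumptions, not the authors' implementation.

# Hedged sketch of GAN-based poison-data generation; all names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps random noise to fake samples of the victim class (flattened 28x28 here)."""
    def __init__(self, noise_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def generate_poison(global_model, victim_class, target_class,
                    noise_dim=100, steps=200, batch=64, device="cpu"):
    """Train a local generator against the frozen global model (acting as the
    discriminator/classifier), then flip the labels of the generated samples."""
    gen = Generator(noise_dim).to(device)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    global_model.eval()                       # global model parameters stay fixed
    for p in global_model.parameters():
        p.requires_grad_(False)

    for _ in range(steps):
        z = torch.randn(batch, noise_dim, device=device)
        fake = gen(z)
        logits = global_model(fake)           # global model scores the fakes
        # Push the generator toward samples the global model classifies as victim_class.
        target = torch.full((batch,), victim_class, dtype=torch.long, device=device)
        loss = F.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Poisoning step: attach the wrong (target) label to the synthesized samples.
    with torch.no_grad():
        z = torch.randn(batch, noise_dim, device=device)
        poison_x = gen(z)
    poison_y = torch.full((batch,), target_class, dtype=torch.long)
    return poison_x.cpu(), poison_y

In this sketch, an attacker would mix (poison_x, poison_y) into its local training data before computing the local update sent to the server; a backdoor-style variant would instead stamp a fixed trigger pattern onto the generated samples rather than flipping their labels.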
Pages: 3310-3322
Number of pages: 13