Poisoning Attack in Federated Learning using Generative Adversarial Nets

Cited by: 142
Authors
Zhang, Jiale [1 ]
Chen, Junjun [2 ]
Wu, Di [3 ,4 ]
Chen, Bing [1 ]
Yu, Shui [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
[3] Univ Technol Sydney, Sch Software, Sydney, NSW 2007, Australia
[4] Univ Technol Sydney, Ctr Artificial Intelligence, Sydney, NSW 2007, Australia
Source
2019 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications / 13th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE 2019) | 2019
Funding
National Natural Science Foundation of China
Keywords
Federated learning; poisoning attack; generative adversarial nets; security; privacy
DOI
10.1109/TrustCom/BigDataSE.2019.00057
Chinese Library Classification
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
Federated learning is a distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are exchanged between the server and the participants, which prevents the server from directly accessing the private training data. However, we observe that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, in which the attacker poses as a benign participant and uploads poisoned updates to the server, thereby degrading the performance of the global model. In this work, we study and evaluate a poisoning attack on federated learning systems based on generative adversarial nets (GANs). The attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, to which the attacker has no access. The attacker then uses these generated samples, which are fully under the attacker's control, to craft poisoning updates, and compromises the global model by uploading the scaled poisoning updates to the server. Our evaluation shows that the attacker can successfully reproduce samples of other benign participants using the GAN, and that the global model achieves more than 80% accuracy on both the poisoning task and the main task.
Pages: 374-380 (7 pages)
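
The abstract describes a two-step procedure: use the downloaded global model as the GAN's discriminator to mimic other participants' samples, then mislabel those samples and upload a scaled poisoning update. The following is a minimal, hypothetical PyTorch sketch of that idea; the architecture, hyperparameters, and names (Generator, mimic_victim_samples, poisoned_update, scale) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the GAN-based poisoning idea from the abstract,
# assuming an MNIST-style 10-class image task. Not the paper's code.
import copy
import torch
import torch.nn as nn

LATENT_DIM = 100
VICTIM_CLASS = 3   # class whose samples the attacker wants to mimic
WRONG_LABEL = 7    # label the attacker assigns to the mimicked samples

class Generator(nn.Module):
    """Maps latent noise to 28x28 images (illustrative architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

def mimic_victim_samples(global_model, generator, steps=200, batch=64):
    """Step 1: train the generator so the frozen global model classifies
    its outputs as VICTIM_CLASS; the shared model acts as discriminator."""
    for p in global_model.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    loss_fn = nn.CrossEntropyLoss()
    target = torch.full((batch,), VICTIM_CLASS, dtype=torch.long)
    for _ in range(steps):
        logits = global_model(generator(torch.randn(batch, LATENT_DIM)))
        loss = loss_fn(logits, target)
        opt.zero_grad(); loss.backward(); opt.step()

def poisoned_update(global_model, generator, scale=10.0, batch=64):
    """Step 2: fine-tune a copy of the global model on mislabeled GAN
    samples and return the scaled parameter delta to upload."""
    local = copy.deepcopy(global_model)
    for p in local.parameters():
        p.requires_grad_(True)
    opt = torch.optim.SGD(local.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    fake = generator(torch.randn(batch, LATENT_DIM)).detach()
    wrong = torch.full((batch,), WRONG_LABEL, dtype=torch.long)
    loss = loss_fn(local(fake), wrong)
    opt.zero_grad(); loss.backward(); opt.step()
    # Scaling the delta amplifies the poison after server-side averaging.
    return {k: scale * (v - global_model.state_dict()[k])
            for k, v in local.state_dict().items()}
```

The scaling step reflects the abstract's mention of "scaled poisoning updates": since the server averages the update with those of many benign participants, multiplying the malicious delta helps it survive aggregation.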