Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks

Cited: 6
Authors
Sun, Yuwei [1 ]
Chong, Ng S. T. [2 ]
Ochiai, Hideya [1 ]
Affiliations
[1] Univ Tokyo, Grad Sch Informat Sci & Technol, Tokyo, Japan
[2] United Nations Univ, Campus Comp Ctr, Tokyo, Japan
Source
2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) | 2021
Keywords
adversarial attacks; federated learning; generative adversarial networks; information stealing; security and privacy;
DOI
10.1109/SMC52423.2021.9658652
CLC Number (Chinese Library Classification)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
An attack on a deep learning system in which intelligent machines collaborate to solve problems could cause a node in the network to err on a critical judgment. At the same time, the security and privacy concerns around AI have galvanized the attention of experts from multiple disciplines. In this research, we successfully mounted adversarial attacks on a federated learning (FL) environment using three different datasets. The attacks leveraged generative adversarial networks (GANs) to affect the learning process and aimed to reconstruct users' private data by learning hidden features from shared local model parameters. The attack was target-oriented, drawing data with distinct class distributions from CIFAR-10, MNIST, and Fashion-MNIST, respectively. Moreover, by measuring the Euclidean distance between the real data and the reconstructed adversarial samples, we evaluated the adversary's performance during the learning process under various scenarios. Finally, we successfully reconstructed the victim's real data from the shared global model parameters with all the applied datasets.
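The abstract describes two mechanics: clients share local model parameters that a server averages into a global model (as in FedAvg), and the adversary's GAN reconstructions are scored by their Euclidean distance to the victim's real data. The sketch below is hypothetical (not the authors' code) and only illustrates those two operations with NumPy; the function names, toy shapes, and values are assumptions for illustration.

```python
# Hypothetical sketch: FedAvg-style parameter averaging plus the
# Euclidean-distance metric used to score GAN reconstructions.
# All names and toy data are illustrative, not from the paper's code.
import numpy as np

def fedavg(local_weights):
    """Average per-layer parameters from all clients into a global model."""
    return [np.mean(np.stack(layer), axis=0) for layer in zip(*local_weights)]

def reconstruction_distance(real, fake):
    """Euclidean (L2) distance between a real sample and a reconstruction."""
    return float(np.linalg.norm(real.ravel() - fake.ravel()))

# Toy round: two clients each share a single-layer "model"; the server
# averages them into the global parameters the adversary observes.
clients = [[np.ones((2, 2))], [np.full((2, 2), 3.0)]]
global_model = fedavg(clients)  # each entry of the averaged layer is 2.0

# The adversary's GAN output is compared against the victim's private
# sample; a smaller distance means a better reconstruction.
victim_sample = np.zeros((28, 28))
gan_output = np.full((28, 28), 0.1)
d = reconstruction_distance(victim_sample, gan_output)
```

In the paper this distance is tracked across training rounds and scenarios, so a decreasing value would indicate the GAN is converging toward the victim's data distribution.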
Pages: 2749-2754
Page count: 6
Related Papers
15 references in total
[1]   Understanding Distributed Poisoning Attack in Federated Learning [J].
Cao, Di ;
Chang, Shan ;
Lin, Zhijian ;
Liu, Guohua ;
Sun, Donghong .
2019 IEEE 25TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2019, :233-239
[2]   Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures [J].
Fredrikson, Matt ;
Jha, Somesh ;
Ristenpart, Thomas .
CCS'15: PROCEEDINGS OF THE 22ND ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2015, :1322-1333
[3]   Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning [J].
Hitaj, Briland ;
Ateniese, Giuseppe ;
Perez-Cruz, Fernando .
CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2017, :603-618
[4]  
Krizhevsky Alex, 2009, Learning Multiple Layers of Features from Tiny Images, University of Toronto
[5]  
LeCun Y., 2010, MNIST handwritten digit database, DOI 10.1109/PDP.2014.102
[6]   Blockchain and Federated Learning for Privacy-Preserved Data Sharing in Industrial IoT [J].
Lu, Yunlong ;
Huang, Xiaohong ;
Dai, Yueyue ;
Maharjan, Sabita ;
Zhang, Yan .
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2020, 16 (06) :4177-4186
[7]   Privacy-Preserving Deep Learning via Additively Homomorphic Encryption [J].
Phong, Le Trieu ;
Aono, Yoshinori ;
Hayashi, Takuya ;
Wang, Lihua ;
Moriai, Shiho .
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2018, 13 (05) :1333-1345
[8]  
Piplai A., 2020, NAttack! Adversarial Attacks to Bypass a GAN Based Classifier Trained to Detect Network Intrusion
[9]  
Shokri R, 2015, Annual Allerton Conference on Communication, Control, and Computing, P909, DOI 10.1109/ALLERTON.2015.7447103
[10]  
Sun Y., 2020, IEEE INT C MACH LEAR