Beyond Model-Level Membership Privacy Leakage: an Adversarial Approach in Federated Learning

Cited: 33
Authors
Chen, Jiale [1 ]
Zhang, Jiale [1 ]
Zhao, Yanchao [1 ]
Han, Hao [1 ]
Zhu, Kun [1 ]
Chen, Bing [1 ]
Affiliation
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing, Peoples R China
Source
2020 29TH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS (ICCCN 2020) | 2020年
Keywords
Federated learning; Membership inference; Generative adversarial networks; User-level
DOI
10.1109/icccn49398.2020.9209744
Chinese Library Classification
TP3 [Computing technology; Computer technology]
Discipline Code
0812
Abstract
With the rise of privacy concerns about traditional centralized machine learning services, federated learning, which incorporates multiple participants to train a global model across their localized training data, has recently received significant attention in both industry and academia. However, recent research reveals the inherent vulnerability of federated learning to membership inference attacks, in which an adversary can infer whether a given data record belongs to the model's training set. Although state-of-the-art techniques can successfully deduce membership information from centralized machine learning models, it remains challenging to infer membership at a more fine-grained level: the user level. In this paper, we propose a novel user-level inference attack mechanism in federated learning. Specifically, we first give a comprehensive analysis of active and targeted membership inference attacks in the context of federated learning. Then, considering a more demanding scenario in which the adversary can only passively observe the updated models across iterations, we incorporate generative adversarial networks into our method to enrich the training set for the final membership inference model. Extensive experimental results demonstrate the effectiveness of our proposed attack in both single-label and multi-label cases.
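The paper itself provides no code; as a minimal sketch of the core intuition behind membership inference that the abstract builds on, the toy example below (all names hypothetical, not from the paper) scores records by the model's top-class confidence, since training members typically receive higher confidence than unseen records. The paper's actual attack is far richer (user-level, GAN-augmented), so this is illustration only.

```python
import numpy as np

def membership_scores(confidences):
    # Score each record by the model's top-class confidence:
    # memorized training members tend to score higher.
    return np.max(confidences, axis=1)

def infer_membership(confidences, threshold=0.9):
    # Predict "member" when the top confidence exceeds the threshold.
    # The threshold here is arbitrary; a real attack would calibrate it.
    return membership_scores(confidences) >= threshold

# Toy prediction vectors: the first row looks like a memorized
# training record, the second like an unseen one.
conf = np.array([[0.97, 0.02, 0.01],
                 [0.40, 0.35, 0.25]])
print(infer_membership(conf))  # → [ True False]
```

In the passive federated setting the abstract describes, the adversary would apply this kind of inference model to outputs of the global models observed across training rounds, using GAN-generated samples to enlarge its own training set.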
Pages: 9