Membership Inference Vulnerabilities in Peer-to-Peer Federated Learning

Cited by: 0
Authors
Luqman, Alka [1 ]
Chattopadhyay, Anupam [1 ]
Lam, Kwok Yan [1 ]
Affiliations
[1] Nanyang Technological University, Singapore
Source
PROCEEDINGS OF THE INAUGURAL ASIACCS 2023 WORKSHOP ON SECURE AND TRUSTWORTHY DEEP LEARNING SYSTEMS, SECTL | 2023
Funding
National Research Foundation, Singapore;
Keywords
federated learning; neural networks; privacy preserving federated learning;
DOI
10.1145/3591197.3593638
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning is emerging as an efficient approach for exploiting the data silos that form due to regulations on data sharing and usage, leveraging distributed resources to improve the training of ML models. It is a fitting technology for cyber-physical systems in applications such as connected autonomous vehicles, smart farming, and IoT surveillance. By design, every participant in federated learning has access to the latest ML model, which makes it all the more important to protect the model's knowledge and to keep the training data and its properties private. In this paper, we survey the literature on ML attacks to assess the risks that apply in a peer-to-peer (P2P) federated learning setup. We then perform membership inference attacks in a P2P federated learning setting with colluding adversaries to evaluate the privacy-accuracy trade-offs in a deep neural network, demonstrating the extent of data leakage possible.
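As an illustrative sketch only (not the authors' experimental setup), the intuition behind a membership inference attack can be simulated with a simple confidence-thresholding attacker: an overfit model tends to assign higher top-class softmax confidence to its own training points than to unseen points, and an adversary guesses "member" whenever confidence exceeds a threshold. The confidence distributions and the threshold below are synthetic assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a model's top-class softmax confidence:
# members (training points) tend to receive higher confidence than
# non-members (held-out points) when the model overfits.
member_conf = rng.uniform(0.85, 1.00, size=500)     # confidences on training data
nonmember_conf = rng.uniform(0.50, 0.95, size=500)  # confidences on unseen data

def infer_membership(confidences, threshold=0.9):
    """Guess 'member' iff the model's top softmax probability
    exceeds the threshold (confidence-thresholding attack)."""
    return confidences >= threshold

tp = infer_membership(member_conf).mean()     # true-positive rate
fp = infer_membership(nonmember_conf).mean()  # false-positive rate
advantage = tp - fp                           # attacker's membership advantage
print(f"TPR={tp:.2f} FPR={fp:.2f} advantage={advantage:.2f}")
```

A positive membership advantage (TPR well above FPR) indicates leakage; in a P2P setting, colluding peers can mount this style of attack against the shared model each participant holds.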
Pages: 5