Membership Inference Attacks and Defenses in Federated Learning: A Survey

Cited: 8
Authors
Bai, Li [1 ]
Hu, Haibo [1 ]
Ye, Qingqing [1 ]
Li, Haoyang [1 ]
Wang, Leixia [2 ]
Xu, Jianliang [1 ]
Institutions
[1] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[2] Renmin Univ China, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Membership inference attacks; federated learning; deep learning; privacy risk; DIFFERENTIAL PRIVACY; SECURE;
DOI
10.1145/3704633
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Federated learning is a decentralized machine learning approach in which clients train models locally and share model updates to develop a global model. This enables low-resource devices to collaboratively build a high-quality model without requiring direct access to the raw training data. However, despite sharing only model updates, federated learning still faces several privacy vulnerabilities. One of the key threats is membership inference attacks, which target clients' privacy by determining whether a specific example is part of the training set. These attacks can compromise sensitive information in real-world applications, such as medical diagnoses within a healthcare system. Although there has been extensive research on membership inference attacks, a comprehensive and up-to-date survey focused specifically on them within federated learning is still absent. To fill this gap, we categorize and summarize membership inference attacks and their corresponding defense strategies based on their characteristics in this setting. We introduce a unique taxonomy of existing attack research and provide a systematic overview of the various countermeasures. For these studies, we thoroughly analyze the strengths and weaknesses of the different approaches. Finally, we identify and discuss key future research directions for readers interested in advancing the field.
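To make the two mechanisms in the abstract concrete, the following is a minimal toy sketch (not from the survey itself; all function names, data, and the threshold value are illustrative assumptions): clients each fit a trivial local "model", a server averages the updates in FedAvg style, and an attacker runs a loss-threshold membership inference test — training examples tend to incur lower loss under the global model than points the model never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(data):
    # Toy "training": each client's model is just the mean of its local data.
    return data.mean(axis=0)

def fed_avg(updates):
    # Server-side federated averaging of the client updates.
    return np.mean(updates, axis=0)

def loss(model, x):
    # Squared error of one example under the (toy) global model.
    return float(np.sum((x - model) ** 2))

def infer_member(model, x, tau):
    # Threshold-based membership inference: predict "member" if loss < tau.
    return loss(model, x) < tau

# Five clients, each holding 50 points drawn around a shared true mean.
true_mean = np.array([1.0, -2.0])
clients = [true_mean + 0.1 * rng.standard_normal((50, 2)) for _ in range(5)]

# One round of federated averaging over the local updates.
global_model = fed_avg([local_update(d) for d in clients])

member = clients[0][0]        # a real training example of client 0
non_member = true_mean + 5.0  # a point far from every client's data
tau = 1.0                     # attacker's (illustrative) decision threshold

print(infer_member(global_model, member, tau))      # member: low loss
print(infer_member(global_model, non_member, tau))  # non-member: high loss
```

Real attacks surveyed in the paper exploit far richer signals (gradients, per-round updates, shadow models), but the decision rule above captures the core idea: the model's behavior on an example leaks whether it was trained on.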
Pages: 35