Privacy-preserving federated learning compatible with robust aggregators

Cited by: 2
Authors
Alebouyeh, Zeinab [1 ]
Bidgoly, Amir Jalaly [1 ]
Affiliations
[1] Univ Qom, Dept Engn, Qom, Iran
Keywords
Federated learning; Membership inference attack; Byzantine attack; Robustness; Privacy-preserving;
DOI
10.1016/j.engappai.2025.110078
Chinese Library Classification (CLC): TP [Automation technology; computer technology]
Discipline code: 0812
Abstract
Federated learning has emerged as a promising paradigm for collaborative machine learning across decentralized devices, enabling model training without centralized data aggregation. However, it raises critical security and privacy challenges. On one hand, sharing the gradients or parameters of user models poses privacy risks; on the other, malicious users sending corrupted parameters can disrupt the global model on the server. Robust aggregators play a crucial role in handling such malicious users by scrutinizing the transmitted vectors for anomalies. However, privacy-preserving methods often add noise to, or encrypt, the user vectors, which can undermine the effectiveness of robust aggregators. In this paper, we propose a privacy-preserving method that remains compatible with robust aggregators because it does not directly alter the user parameter vectors: instead, noise is introduced into the training data or during model training. Our findings demonstrate that the proposed method not only reduces the accuracy of the membership inference attack to chance level but also remains compatible with robust aggregators without compromising the model's accuracy, thereby balancing privacy, security, and efficiency.
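The core idea in the abstract can be illustrated with a minimal toy sketch: each client injects noise during its local training step rather than into the vector it shares, so the updates reaching the server are ordinary model deltas and a standard robust aggregator (here, coordinate-wise median; the paper may use different aggregators) can still filter Byzantine clients. All names, the least-squares model, and the noise scale below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, noise_std=0.05):
    # One step of noisy least-squares gradient descent.
    # Noise is injected into the TRAINING step (not into the shared
    # vector), so the update a client sends is an ordinary model delta.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    grad += rng.normal(0.0, noise_std, size=grad.shape)  # training-time noise
    return weights - lr * grad

def coordinate_median(updates):
    # A standard robust aggregator: coordinate-wise median of client vectors.
    return np.median(np.stack(updates), axis=0)

# Toy federation: 5 honest clients and 1 Byzantine client.
d = 3
true_w = np.array([1.0, -2.0, 0.5])
w_global = np.zeros(d)

for _ in range(200):
    updates = []
    for _ in range(5):  # honest clients train on fresh local data
        X = rng.normal(size=(32, d))
        y = X @ true_w
        updates.append(local_update(w_global.copy(), X, y))
    updates.append(np.full(d, 100.0))  # Byzantine client sends garbage
    w_global = coordinate_median(updates)

print(w_global)  # close to true_w despite the Byzantine client
```

Because the noise lives inside each client's optimization, honest updates stay statistically close to one another, and the median discards the outlier without any special handling of the privacy mechanism.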
Pages: 14