Byzantine-Resilient Secure Federated Learning

Times Cited: 128
Authors
So, Jinhyun [1 ]
Guler, Basak [2 ]
Avestimehr, A. Salman [1 ]
Affiliations
[1] Univ Southern Calif, Dept Elect & Comp Engn, Los Angeles, CA 90089 USA
[2] Univ Calif Riverside, Dept Elect & Comp Engn, Riverside, CA 92521 USA
Keywords
Federated learning; privacy-preserving machine learning; Byzantine-resilience; distributed training in mobile networks;
DOI
10.1109/JSAC.2020.3041404
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification Code
0808; 0809
Abstract
Secure federated learning is a privacy-preserving framework to improve machine learning models by training over large volumes of data collected by mobile users. This is achieved through an iterative process where, at each iteration, users update a global model using their local datasets. Each user then masks its local update via random keys, and the masked models are aggregated at a central server to compute the global model for the next iteration. As the local updates are protected by random masks, the server cannot observe their true values. This presents a major challenge for the resilience of the model against adversarial (Byzantine) users, who can manipulate the global model by modifying their local updates or datasets. Towards addressing this challenge, this paper presents the first single-server Byzantine-resilient secure aggregation framework (BREA) for secure federated learning. BREA is based on an integrated stochastic quantization, verifiable outlier detection, and secure model aggregation approach to guarantee Byzantine-resilience, privacy, and convergence simultaneously. We provide theoretical convergence and privacy guarantees and characterize the fundamental trade-offs in terms of the network size, user dropouts, and privacy protection. Our experiments demonstrate convergence in the presence of Byzantine users, and comparable accuracy to conventional federated learning benchmarks.
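To make the aggregation idea in the abstract concrete, the sketch below illustrates two of the ingredients it names: unbiased stochastic quantization of local updates and masking of updates so that the masks cancel when the server sums them. This is a minimal illustrative sketch, not the paper's implementation: it uses simple pairwise additive masks for readability, whereas BREA builds on verifiable secret sharing over a finite field together with outlier detection, and all names and parameters here (stochastic_quantize, pairwise_masks, levels, modulus) are assumptions introduced for illustration.

```python
import numpy as np

def stochastic_quantize(x, levels=256, scale=1.0):
    """Unbiased stochastic rounding of a real-valued vector onto an integer grid.

    Each coordinate is rounded up with probability equal to its fractional part,
    so the expectation of the quantized value equals the original value
    (a standard construction; BREA's exact quantizer may differ)."""
    y = x * levels / scale
    low = np.floor(y)
    prob_up = y - low                       # P(round up) keeps the estimator unbiased
    q = low + (np.random.rand(*x.shape) < prob_up)
    return q.astype(np.int64)

def pairwise_masks(num_users, dim, modulus, seed=0):
    """Generate additive masks such that the masks of all users sum to zero
    (mod modulus): for each pair (i, j), user i adds a random vector r and
    user j subtracts the same r, so everything cancels in the aggregate."""
    rng = np.random.default_rng(seed)
    masks = [np.zeros(dim, dtype=np.int64) for _ in range(num_users)]
    for i in range(num_users):
        for j in range(i + 1, num_users):
            r = rng.integers(0, modulus, size=dim)
            masks[i] = (masks[i] + r) % modulus
            masks[j] = (masks[j] - r) % modulus
    return masks

# Toy run: three users, ten-dimensional local updates.
modulus = 2**31 - 1
updates = [stochastic_quantize(np.random.randn(10)) for _ in range(3)]
masks = pairwise_masks(num_users=3, dim=10, modulus=modulus)
masked = [(u + m) % modulus for u, m in zip(updates, masks)]

# The server only sees the masked updates, yet their sum equals the sum of
# the true quantized updates because the pairwise masks cancel.
recovered = sum(masked) % modulus
assert np.array_equal(recovered, sum(updates) % modulus)
```

In this toy example the server recovers the exact sum of the quantized updates without observing any individual unmasked update; BREA combines this kind of privacy-preserving aggregation with verifiable outlier detection on the secret-shared updates to additionally tolerate Byzantine users.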
Pages: 2168-2181
Page count: 14