FLOD: Oblivious Defender for Private Byzantine-Robust Federated Learning with Dishonest-Majority

Cited by: 45
Authors
Dong, Ye [1 ,2 ]
Chen, Xiaojun [1 ,2 ]
Li, Kaiyun [1 ,2 ]
Wang, Dakui [1 ]
Zeng, Shuai [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
Source
COMPUTER SECURITY - ESORICS 2021, PT I | 2021 / Vol. 12972
Keywords
Privacy-preserving; Byzantine-robust; Federated learning; Dishonest-majority; Framework; Efficient
DOI
10.1007/978-3-030-88418-5_24
CLC number
TP31 [Computer Software];
Discipline classification code
081202 ; 0835 ;
Abstract
Privacy and Byzantine-robustness are two major concerns of federated learning (FL), but mitigating both threats simultaneously is highly challenging: privacy-preserving strategies prohibit access to individual model updates to avoid leakage, while Byzantine-robust methods require such access for comprehensive mathematical analysis. Moreover, most Byzantine-robust methods only work in the honest-majority setting. We present FLOD, a novel oblivious defender for private Byzantine-robust FL in the dishonest-majority setting. At its core, we propose a novel Hamming distance-based aggregation method that resists >1/2 Byzantine attacks by using a small root dataset and a server model to bootstrap trust. Furthermore, we employ two non-colluding servers and use additive homomorphic encryption (AHE) and secure two-party computation (2PC) primitives to construct efficient privacy-preserving building blocks for secure aggregation, in which we propose two novel in-depth variants of Beaver multiplication triples (MT) that significantly reduce the overhead of Bit-to-Arithmetic (Bit2A) conversion and vector weighted-sum aggregation (VSWA). Experiments on real-world and synthetic datasets demonstrate FLOD's effectiveness and efficiency: (i) FLOD defeats known Byzantine attacks with a negligible effect on accuracy and convergence, (ii) it reduces the offline (resp. online) overhead of Bit2A and VSWA by roughly 2x compared to ABY-AHE (resp. ABY-MT) based methods (NDSS'15), and (iii) it reduces total online communication and run-time by 167-1416x and 3.1-7.4x compared to FLGUARD (Crypto Eprint 2021/025).
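To make the aggregation rule concrete, below is a minimal plaintext sketch of a Hamming distance-based aggregation in the spirit the abstract describes. The function names, the exact weighting rule (max(0, l/2 - HD)), and the fallback behavior are illustrative assumptions rather than the paper's exact protocol, and FLOD itself evaluates these steps obliviously under AHE/2PC rather than in the clear.

import numpy as np

def sign_bits(update):
    # Binarize an update: 1 where the coordinate is non-negative, 0 otherwise.
    return (np.asarray(update) >= 0).astype(np.uint8)

def robust_aggregate(client_updates, server_update):
    # client_updates: list of 1-D numpy arrays, one local update per client.
    # server_update:  1-D numpy array trained on the small root dataset.
    l = server_update.size
    server_bits = sign_bits(server_update)

    weights = []
    for upd in client_updates:
        # Hamming distance between the client's and the server's sign vectors.
        hd = int(np.sum(sign_bits(upd) != server_bits))
        # Clients whose update direction is close to the trusted one keep a
        # positive weight; the rest are zeroed out (assumed weighting rule).
        weights.append(max(0, l // 2 - hd))

    total = sum(weights)
    if total == 0:
        # No client passed the check; fall back to the root-dataset update.
        return server_update

    # Weighted sum of the accepted updates -- the plaintext analogue of VSWA.
    return sum(w * u for w, u in zip(weights, client_updates)) / total

For instance, a client whose update is the exact sign-flip of the server-model update has Hamming distance l and therefore weight 0 under this assumed rule, while clients broadly agreeing with the trusted direction dominate the weighted sum.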
Pages: 497-518
Number of pages: 22
References
39 in total
[1]  
Alistarh D, 2018, ADV NEUR IN, V31
[2]  
[Anonymous], 2020, Microsoft SEAL (release 3.6)
[3]  
[Anonymous], 2017, Master's Thesis
[4]  
[Anonymous], 2012, P 2012 ACM C COMP CO, DOI 10.1145/2382196.2382279
[5]  
Bagdasaryan E, 2020, PR MACH LEARN RES, V108, P2938
[6]  
BEAVER D, 1992, LECT NOTES COMPUT SC, V576, P420
[7]  
Bernstein J, 2018, PR MACH LEARN RES, V80
[8]  
Blanchard P, 2017, ADV NEUR IN, V30
[9]  
Bogdanov D, 2008, LECT NOTES COMPUT SC, V5283, P192
[10]
Bonawitz K., Ivanov V., Kreuter B., Marcedone A., McMahan H.B., Patel S., Ramage D., Segal A., Seth K., 2017, Practical Secure Aggregation for Privacy-Preserving Machine Learning, CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, P1175-1191