A Verifiable and Privacy-Preserving Federated Learning Training Framework

Cited by: 1
Authors
Duan, Haohua [1 ]
Peng, Zedong [2 ]
Xiang, Liyao [3 ]
Hu, Yuncong [3 ]
Li, Bo [4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Comp Sci, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Informat Engn, Shanghai 200240, Peoples R China
[3] Shanghai Jiao Tong Univ, Shanghai 200240, Peoples R China
[4] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Protocols; Neural networks; Training; Logic gates; Servers; Backpropagation; Integrated circuit modeling; Zero knowledge proofs; privacy preserving; neural networks;
DOI
10.1109/TDSC.2024.3369658
CLC classification
TP3 [Computing technology, computer technology];
Discipline code
0812 ;
Abstract
Federated learning allows multiple clients to collaboratively train a global model without revealing their private data. Despite its success in many applications, it remains a challenge to prevent malicious clients from corrupting the global model by uploading incorrect model updates. A critical issue therefore arises: how to validate that training was truly conducted on a legitimate neural network. To address this issue, we propose VPNNT, a zero-knowledge proof scheme for neural network backpropagation. VPNNT enables each client to prove to others that its model updates (gradients) were indeed calculated on the global model of the previous round, without leaking any information about the client's private training data. Our proof scheme is applicable to any type of neural network. Unlike conventional verification schemes, which express neural network operations as gate-level circuits, we improve verification efficiency by formulating the training process with custom gates (matrix operations) and applying an optimized linear-time zero-knowledge protocol for verification. Thanks to the recursive structure of neural network backpropagation, common custom gates are combined during verification, reducing prover and verifier costs compared with conventional zero-knowledge proofs. Experimental results show that VPNNT is a lightweight verification scheme for neural network backpropagation with improved proving time, verification time, and proof size.
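To illustrate the matrix-operation view the abstract describes, the sketch below shows a generic two-layer backward pass written entirely as matrix multiplications, transposes, and elementwise products in plain Python. This is an illustrative example of backpropagation's recursive structure, not the paper's actual VPNNT circuit or protocol: every layer repeats the same small set of matrix operations, which is why matrix-level "custom gates" can be shared across layers in a proof.

```python
# Generic backpropagation expressed as matrix operations (illustrative sketch,
# not the VPNNT construction itself). Matrices are lists of rows.

def matmul(A, B):
    """Matrix product A @ B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def hadamard(A, B):
    """Elementwise (Hadamard) product."""
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def relu(A):
    return [[max(0.0, x) for x in row] for row in A]

def relu_grad(A):
    return [[1.0 if x > 0 else 0.0 for x in row] for row in A]

# Two-layer network: a1 = relu(W1 @ x), y = W2 @ a1, loss = 0.5 * (y - t)^2
W1 = [[0.5, -0.2], [0.1, 0.3]]
W2 = [[0.4, 0.7]]
x = [[1.0], [2.0]]
t = [[1.0]]

# Forward pass.
z1 = matmul(W1, x)
a1 = relu(z1)
y = matmul(W2, a1)

# Backward pass: each layer reuses the same matrix-operation pattern
#   delta_l = (W_{l+1}^T @ delta_{l+1}) .* act'(z_l),  grad_l = delta_l @ a_{l-1}^T
delta2 = [[y[0][0] - t[0][0]]]                                # dL/dy
gW2 = matmul(delta2, transpose(a1))                           # gradient for W2
delta1 = hadamard(matmul(transpose(W2), delta2), relu_grad(z1))
gW1 = matmul(delta1, transpose(x))                            # gradient for W1
```

The repeated pattern (one transpose-multiply, one elementwise product, one outer product per layer) is the recursive structure the abstract exploits: a verifier can check every layer with the same few matrix-operation gates instead of a fresh gate-level circuit per layer.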
Pages: 5046-5058
Page count: 13