PriVeriFL: Privacy-Preserving and Aggregation-Verifiable Federated Learning

Cited by: 8
Authors
Wang, Lulu [1 ,2 ]
Polato, Mirko [3 ]
Brighente, Alessandro [4 ]
Conti, Mauro [4 ]
Zhang, Lei [1 ,2 ]
Xu, Lin [1 ,2 ]
Affiliations
[1] East China Normal Univ, Minist Educ, Engn Res Ctr Software Hardware Codesign Technol &, Shanghai 200062, Peoples R China
[2] Shanghai Key Lab Trustworthy Comp, Shanghai 200062, Peoples R China
[3] Univ Turin, Dept Comp Sci, I-10124 Turin, Italy
[4] Univ Padua, Dept Math, I-35121 Padua, Italy
Keywords
Data models; Data privacy; Privacy; Training; Computational modeling; Analytical models; Homomorphic encryption; Aggregation integrity; data privacy; federated learning; homomorphic encryption; homomorphic hash; INFERENCE; ATTACKS; SECURE;
DOI
10.1109/TSC.2024.3451183
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
Federated learning provides a collaborative way to build machine learning models without sharing private data. However, attackers might infer private information from the model updates submitted by participants, and the aggregator might maliciously forge the final aggregation results; federated learning therefore still faces data privacy and aggregation integrity challenges. In this paper, we combine inference attacks and information theory to analyze the sensitivity of different bits of the model parameters, and conclude that not all bits of the model parameters leak privacy. This observation inspires us to propose a novel low-expansion homomorphic aggregation scheme based on Paillier homomorphic encryption (PHE) for safeguarding participants' data privacy. Building upon this, we develop PriVeriFL-A, a privacy-preserving and aggregation-verifiable federated learning scheme that combines a homomorphic hash function with signatures. To prevent collusion attacks between the aggregator and malicious participants, we further improve our PHE-based scheme into a threshold PHE-based one, named PriVeriFL-B. Compared with a privacy-preserving federated learning scheme based on classic PHE, PriVeriFL-A reduces the communication overhead to 1.65% and the encryption/decryption computation overhead to 0.88% of the baseline's. Both PriVeriFL-A and PriVeriFL-B can effectively verify the integrity of the global model while incurring almost negligible communication overhead for integrity verification and protecting the privacy of participants' data.
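The two building blocks named in the abstract can be illustrated with a toy sketch: Paillier encryption is additively homomorphic (multiplying ciphertexts adds plaintexts), so an aggregator can sum encrypted model updates without decrypting them, and a homomorphic hash lets participants check that the claimed aggregate matches the sum of the inputs. The parameter sizes, quantized integer updates, and the exponentiation-based hash below are illustrative assumptions, not the paper's actual construction; real deployments use keys of at least 2048 bits.

```python
import random
from math import gcd

# --- Toy Paillier keypair (tiny primes for illustration only) ---
p, q = 17, 19
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard generator choice g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# --- Homomorphic aggregation: product of ciphertexts = sum of plaintexts ---
updates = [5, 7, 11]          # quantized model updates from three participants
agg_ct = 1
for u in updates:
    agg_ct = (agg_ct * encrypt(u)) % n2
agg = decrypt(agg_ct)
assert agg == sum(updates)

# --- Toy homomorphic hash H(m) = h^m mod P, so H(a)*H(b) = H(a+b) ---
P, h = 2029, 2                # illustrative prime and base
def H(m):
    return pow(h, m, P)

# Each participant publishes H(update); anyone can verify the aggregate.
product_of_hashes = 1
for u in updates:
    product_of_hashes = (product_of_hashes * H(u)) % P
assert product_of_hashes == H(agg)   # aggregation integrity check passes
```

The check in the last line is the essence of aggregation verifiability: a forged aggregate would fail the hash equation, while the ciphertexts themselves reveal nothing about individual updates.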
Pages: 998-1011
Number of pages: 14
Related papers
57 records in total
[1]   Deep Learning with Differential Privacy [J].
Abadi, Martin ;
Chu, Andy ;
Goodfellow, Ian ;
McMahan, H. Brendan ;
Mironov, Ilya ;
Talwar, Kunal ;
Zhang, Li .
CCS'16: PROCEEDINGS OF THE 2016 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2016, :308-318
[2]   Charm: a framework for rapidly prototyping cryptosystems [J].
Akinyele, Joseph A. ;
Garman, Christina ;
Miers, Ian ;
Pagano, Matthew W. ;
Rushanan, Michael ;
Green, Matthew ;
Rubin, Aviel D. .
JOURNAL OF CRYPTOGRAPHIC ENGINEERING, 2013, 3 (02) :111-128
[3]  
[Anonymous], 2002, Information Security and Cryptography
[4]  
[Anonymous], 2017, NIPS AUTODIFF WORKSH
[5]  
Barker E., 2006, Recommendation for key management
[6]  
Bellare M., 1994, Advances in Cryptology - CRYPTO '94. 14th Annual International Cryptology Conference. Proceedings, P216
[7]  
Bonawitz K., 2019, Proc. Mach. Learn. Res, V1, P374
[8]   Practical Secure Aggregation for Privacy-Preserving Machine Learning [J].
Bonawitz, Keith ;
Ivanov, Vladimir ;
Kreuter, Ben ;
Marcedone, Antonio ;
McMahan, H. Brendan ;
Patel, Sarvar ;
Ramage, Daniel ;
Segal, Aaron ;
Seth, Karn .
CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2017, :1175-1191
[9]   EFFICIENT FULLY HOMOMORPHIC ENCRYPTION FROM (STANDARD) LWE [J].
Brakerski, Zvika ;
Vaikuntanathan, Vinod .
SIAM JOURNAL ON COMPUTING, 2014, 43 (02) :831-871
[10]   Homomorphic Encryption for Arithmetic of Approximate Numbers [J].
Cheon, Jung Hee ;
Kim, Andrey ;
Kim, Miran ;
Song, Yongsoo .
ADVANCES IN CRYPTOLOGY - ASIACRYPT 2017, PT I, 2017, 10624 :409-437