VeriFL: Communication-Efficient and Fast Verifiable Aggregation for Federated Learning

Cited by: 184
Authors
Guo, Xiaojie [1 ,2 ,3 ]
Liu, Zheli [1 ,2 ,3 ]
Li, Jin [1 ,4 ]
Gao, Jiqiang [1 ,2 ,3 ]
Hou, Boyu [1 ,2 ,3 ]
Dong, Changyu [5 ]
Baker, Thar [6 ]
Affiliations
[1] Nankai Univ, Coll Cyber Sci, Tianjin 300071, Peoples R China
[2] Nankai Univ, Coll Comp Sci, Tianjin 300071, Peoples R China
[3] Nankai Univ, Tianjin Key Lab Network & Data Secur Technol, Tianjin 300071, Peoples R China
[4] Guangzhou Univ, Sch Comp Sci, Guangzhou 510006, Peoples R China
[5] Newcastle Univ, Sch Comp, Newcastle Upon Tyne NE1 7RU, Tyne & Wear, England
[6] Univ Sharjah, Dept Comp Sci, Coll Comp & Informat, Sharjah, U Arab Emirates
Funding
UK Engineering and Physical Sciences Research Council; National Natural Science Foundation of China;
Keywords
Federated learning; verifiable aggregation; linearly homomorphic hash; commitment; machine learning;
DOI
10.1109/TIFS.2020.3043139
CLC (Chinese Library Classification) Number
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
Federated learning (FL) enables a large number of clients to collaboratively train a global model through sharing their gradients in each synchronized epoch of local training. However, a centralized server used to aggregate these gradients can be compromised and forge the result in order to violate privacy or launch other attacks, which incurs the need to verify the integrity of aggregation. In this work, we explore how to design communication-efficient and fast verifiable aggregation in FL. We propose VERIFL, a verifiable aggregation protocol, with O(N) (dimension-independent) communication and O(N + d) computation for verification in each epoch, where N is the number of clients and d is the dimension of gradient vectors. Since d can be large in some real-world FL applications (e.g., 100K), our dimension-independent communication is especially desirable for clients with limited bandwidth and high-dimensional gradients. In addition, the proposed protocol can be used in the FL setting where secure aggregation is needed or there is a subset of clients dropping out of protocol execution. Experimental results indicate that our protocol is efficient in these settings.
Pages: 1736-1751
Page count: 16