A Privacy-Preserving Federated Learning for Multiparty Data Sharing in Social IoTs

Cited: 100
Authors
Yin, Lihua [1 ]
Feng, Jiyuan [1 ]
Xun, Hao [2 ]
Sun, Zhe [1 ]
Cheng, Xiaochun [3 ]
Affiliations
[1] Guangzhou Univ, Cyberspace Inst Adv Technol, Guangzhou 510006, Peoples R China
[2] Cyberspace Secur Res Ctr, Peng Cheng Lab, Shenzhen 518000, Peoples R China
[3] Middlesex Univ, Dept Comp Sci, London NW4 4BT, England
Source
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING | 2021, Vol. 8, Issue 3
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation; National Key Research and Development Program of China;
Keywords
Privacy; Encryption; Differential privacy; Training; Data privacy; Servers; Deep learning; Multiparty data sharing; federated learning; privacy-preserving; functional encryption; local differential privacy;
DOI
10.1109/TNSE.2021.3074185
CLC number
T [Industrial Technology];
Subject classification
08;
Abstract
As 5G and mobile computing grow rapidly, deep learning services in Social Computing and the Social Internet of Things (IoT) have enriched our lives over the past few years. Mobile and IoT devices with computing capabilities can join social computing anytime and anywhere. Federated learning makes full use of decentralized training devices without requiring raw data, making it easier to break down data silos and deliver more precise services. However, various attacks have shown that the current federated learning training process is still threatened by disclosures at both the data and content levels. In this paper, we propose a new hybrid privacy-preserving method for federated learning to meet these challenges. First, we employ an advanced functional encryption algorithm that protects both the characteristics of the data uploaded by each client and the weight of each participant in the weighted summation procedure. Second, we design a local Bayesian differential privacy noise mechanism that effectively improves adaptability to differently distributed data sets. In addition, we use a sparse differential gradient to improve transmission and storage efficiency during federated learning training. Experiments show that when the sparse differential gradient is used to improve transmission efficiency, model accuracy drops by at most 3%.
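The client-side pipeline the abstract describes (bound each update, perturb it locally before upload, then transmit only a sparse gradient) can be sketched roughly as follows. This is a minimal illustration under plain Gaussian noise, not the paper's actual algorithm: the function name, clipping constant, noise scale, and top-k fraction are all assumptions, and the paper's Bayesian calibration of the noise and its functional-encryption aggregation step are omitted entirely.

```python
import numpy as np

def ldp_sparse_update(grad, clip_norm=1.0, sigma=0.5, k_frac=0.1, rng=None):
    """Hypothetical client-side step: clip a gradient, add Gaussian noise
    locally (before upload), then keep only the top-k entries by magnitude
    as a sparse differential gradient. All parameters are illustrative."""
    rng = rng or np.random.default_rng(0)
    # 1. Clip to bound the sensitivity of this client's contribution.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # 2. Perturb on the client, so the server never sees the raw gradient.
    noisy = clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    # 3. Transmit only the k largest-magnitude coordinates.
    k = max(1, int(k_frac * grad.size))
    idx = np.argsort(np.abs(noisy))[-k:]
    sparse = np.zeros_like(noisy)
    sparse[idx] = noisy[idx]
    return sparse

g = np.array([0.9, -0.1, 0.05, -2.0, 0.3])
u = ldp_sparse_update(g, k_frac=0.4)
print(np.count_nonzero(u))  # at most 2 of 5 coordinates are uploaded
```

Sparsifying after the noise step keeps the upload small (only k values plus their indices), which is the transmission-efficiency trade-off the abstract quantifies as an accuracy drop of at most 3%.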
Pages: 2706-2718
Page count: 13
References
40 in total
[1]   Deep Learning with Differential Privacy [J].
Abadi, Martin ;
Chu, Andy ;
Goodfellow, Ian ;
McMahan, H. Brendan ;
Mironov, Ilya ;
Talwar, Kunal ;
Zhang, Li .
CCS'16: PROCEEDINGS OF THE 2016 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2016, :308-318
[2]   Multi-Input Functional Encryption for Inner Products: Function-Hiding Realizations and Constructions Without Pairings [J].
Abdalla, Michel ;
Catalano, Dario ;
Fiore, Dario ;
Gay, Romain ;
Ursu, Bogdan .
ADVANCES IN CRYPTOLOGY - CRYPTO 2018, PT I, 2018, 10991 :597-627
[3]  
[Anonymous], 2007, 7 IEEE INT C DAT MIN
[4]   Scalable and Secure Logistic Regression via Homomorphic Encryption [J].
Aono, Yoshinori ;
Hayashi, Takuya ;
Le Trieu Phong ;
Wang, Lihua .
CODASPY'16: PROCEEDINGS OF THE SIXTH ACM CONFERENCE ON DATA AND APPLICATION SECURITY AND PRIVACY, 2016, :142-144
[5]   Practical Secure Aggregation for Privacy-Preserving Machine Learning [J].
Bonawitz, Keith ;
Ivanov, Vladimir ;
Kreuter, Ben ;
Marcedone, Antonio ;
McMahan, H. Brendan ;
Patel, Sarvar ;
Ramage, Daniel ;
Segal, Aaron ;
Seth, Karn .
CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2017, :1175-1191
[6]  
Boneh D, 2011, LECT NOTES COMPUT SC, V6597, P253, DOI 10.1007/978-3-642-19571-6_16
[7]
Chai D., arXiv:1906.05108
[8]
Dong J., 2019, arXiv:1905.02383
[9]   Calibrating noise to sensitivity in private data analysis [J].
Dwork, Cynthia ;
McSherry, Frank ;
Nissim, Kobbi ;
Smith, Adam .
THEORY OF CRYPTOGRAPHY, PROCEEDINGS, 2006, 3876 :265-284
[10]  
Fienberg SE, 2006, LECT NOTES COMPUT SC, V4302, P277