A Privacy-Preserving Federated Learning for Multiparty Data Sharing in Social IoTs

Cited by: 100
Authors
Yin, Lihua [1 ]
Feng, Jiyuan [1 ]
Xun, Hao [2 ]
Sun, Zhe [1 ]
Cheng, Xiaochun [3 ]
Affiliations
[1] Guangzhou Univ, Cyberspace Inst Adv Technol, Guangzhou 510006, Peoples R China
[2] Cyberspace Secur Res Ctr, Peng Cheng Lab, Shenzhen 518000, Peoples R China
[3] Middlesex Univ, Dept Comp Sci, London NW4 4BT, England
Source
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING | 2021, Vol. 8, No. 3
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation; National Key Research and Development Program of China;
Keywords
Privacy; Encryption; Differential privacy; Training; Data privacy; Servers; Deep learning; Multiparty data sharing; federated learning; privacy-preserving; functional encryption; local differential privacy;
D O I
10.1109/TNSE.2021.3074185
Chinese Library Classification
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
As 5G and mobile computing grow rapidly, deep learning services in Social Computing and the Social Internet of Things (IoT) have enriched our lives over the past few years. Mobile and IoT devices with computing capabilities can join social computing anytime and anywhere. Federated learning makes full use of decentralized training devices without requiring raw data, which further helps break data silos and deliver more precise services. However, various attacks show that the current federated learning training process is still threatened by disclosures at both the data and content levels. In this paper, we propose a new hybrid privacy-preserving method for federated learning to meet these challenges. First, we employ an advanced functional encryption algorithm that protects both the characteristics of the data uploaded by each client and the weight of each participant in the weighted summation procedure. Second, we design a local Bayesian differential privacy noise mechanism that effectively improves adaptability across differently distributed data sets. In addition, we use the Sparse Differential Gradient to improve transmission and storage efficiency in federated learning training. Experiments show that when the sparse differential gradient is used to improve transmission efficiency, model accuracy drops by at most 3%.
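The three client-side steps the abstract describes — local differential-privacy noise, gradient sparsification, and weighted aggregation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and parameters (`clip`, `sigma`, `k`) are my own, the noise here is plain Gaussian rather than the paper's Bayesian mechanism, and the weighted summation is done in plaintext where the paper would protect it with functional encryption.

```python
import numpy as np

def clip_and_noise(grad, clip=1.0, sigma=0.5, rng=None):
    # Local DP step (illustrative): clip the gradient's L2 norm, then add
    # Gaussian noise on the client before anything leaves the device.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip / max(norm, 1e-12))
    return grad + rng.normal(0.0, sigma * clip, size=grad.shape)

def sparsify(grad, k):
    # Sparse gradient step: keep only the k largest-magnitude entries and
    # zero the rest, cutting transmission and storage cost.
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

def aggregate(updates, weights):
    # Server-side weighted summation; in the paper this step runs under
    # functional encryption so individual updates and weights stay hidden.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))
```

A client would call `clip_and_noise` then `sparsify` on its local gradient before upload, and the server would call `aggregate` over all received updates.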
Pages: 2706-2718
Page count: 13
References
40 in total
[21]  
McMahan Brendan, 2017, Google Research Blog
[22]  
McMahan HB, 2017, PR MACH LEARN RES, V54, P1273
[23]   Exploiting Unintended Feature Leakage in Collaborative Learning [J].
Melis, Luca ;
Song, Congzheng ;
De Cristofaro, Emiliano ;
Shmatikov, Vitaly .
2019 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP 2019), 2019, :691-706
[24]  
Pettai M., 2015, P 12 ACM WORKSH ART, P421, DOI 10.1145/2818000.2818027
[25]   Privacy-Preserving Deep Learning via Additively Homomorphic Encryption [J].
Phong, Le Trieu ;
Aono, Yoshinori ;
Hayashi, Takuya ;
Wang, Lihua ;
Moriai, Shiho .
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2018, 13 (05) :1333-1345
[26]  
Ryffel T, 2018, arXiv:1811.04017
[27]   Federated Learning With Cooperating Devices: A Consensus Approach for Massive IoT Networks [J].
Savazzi, Stefano ;
Nicoli, Monica ;
Rampa, Vittorio .
IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (05) :4641-4654
[28]   Membership Inference Attacks Against Machine Learning Models [J].
Shokri, Reza ;
Stronati, Marco ;
Song, Congzheng ;
Shmatikov, Vitaly .
2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, :3-18
[29]   Privacy-Preserving Deep Learning [J].
Shokri, Reza ;
Shmatikov, Vitaly .
CCS'15: PROCEEDINGS OF THE 22ND ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2015, :1310-1321
[30]  
Song S, 2013, IEEE GLOB CONF SIG, P245, DOI 10.1109/GlobalSIP.2013.6736861