PPCL: Privacy-preserving collaborative learning for mitigating indirect information leakage

Cited: 72
Authors
Yan, Hongyang [1 ,2 ,5 ]
Hu, Li [1 ,2 ,5 ]
Xiang, Xiaoyu [1 ,2 ,5 ]
Liu, Zheli [3 ]
Yuan, Xu [4 ]
Affiliations
[1] Guangzhou Univ, Sch Artificial Intelligence, Guangzhou, Peoples R China
[2] Guangzhou Univ, Blockchain, Guangzhou, Peoples R China
[3] Nankai Univ, Sch Cyber Sci, Tianjin, Peoples R China
[4] Univ Louisiana Lafayette, Lafayette, LA 70504 USA
[5] Peng Cheng Lab, Shenzhen, Peoples R China
Keywords
Collaborative Learning; Privacy-Preserving; Network Transformation; Network Pruning;
DOI
10.1016/j.ins.2020.09.064
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Collaborative learning and related techniques, such as federated learning, allow multiple clients to jointly train a model while keeping their datasets local. Secure aggregation in most existing works focuses on protecting model gradients from the server. However, a dishonest user can still easily obtain private information about the other users, and it remains a challenge to design an effective solution that prevents such information leakage. To tackle this challenge, we propose a novel and effective privacy-preserving collaborative machine learning scheme aimed at preventing information leakage against these adversaries. Specifically, we first propose a privacy-preserving network transformation method that utilizes Random-Permutation inside Software Guard Extensions (SGX), which protects the model parameters from being inferred by a curious server or dishonest clients. We then apply a Partial-Random Uploading mechanism to mitigate information inference through visualization. To further enhance efficiency, we introduce a network pruning operation and employ it to accelerate the convergence of training. We present a formal security analysis demonstrating that the proposed scheme preserves privacy while ensuring the convergence and accuracy of secure aggregation, and we conduct experiments to evaluate its accuracy and efficiency. The experimental results show that the proposed scheme is practical. (c) 2020 Elsevier Inc. All rights reserved.
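The abstract does not spell out how Partial-Random Uploading works; the following is a minimal illustrative sketch of one plausible reading, in which each client uploads only a randomly chosen subset of its gradient coordinates per round so the server never observes a full per-client update. The function name, the `upload_fraction` parameter, and the zero-masking of withheld coordinates are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def partial_random_upload(gradient, upload_fraction=0.5, rng=None):
    """Hypothetical sketch of Partial-Random Uploading: keep only a
    random subset of gradient coordinates and zero out the rest, so a
    single round reveals only a fraction of the client's update."""
    rng = np.random.default_rng() if rng is None else rng
    flat = gradient.ravel()
    k = max(1, int(upload_fraction * flat.size))
    # Choose which coordinates this client uploads in this round.
    keep = rng.choice(flat.size, size=k, replace=False)
    uploaded = np.zeros_like(flat)
    uploaded[keep] = flat[keep]
    return uploaded.reshape(gradient.shape), keep

# Example: a client uploads half of an 8-coordinate gradient.
grad = np.arange(8, dtype=float)
sparse_grad, kept_idx = partial_random_upload(grad, upload_fraction=0.5)
```

Over many rounds the server still receives unbiased coverage of the model in aggregate, while any single round exposes only a partial view of each client's gradient.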
Pages: 423-437
Page count: 15
References
39 records
[1]  
Anwar A., HYBRIDALPHA EFFICIEN
[2]  
Arnautov S, 2016, PROCEEDINGS OF OSDI'16: 12TH USENIX SYMPOSIUM ON OPERATING SYSTEMS DESIGN AND IMPLEMENTATION, P689
[3]   Practical Secure Aggregation for Privacy-Preserving Machine Learning [J].
Bonawitz, Keith ;
Ivanov, Vladimir ;
Kreuter, Ben ;
Marcedone, Antonio ;
McMahan, H. Brendan ;
Patel, Sarvar ;
Ramage, Daniel ;
Segal, Aaron ;
Seth, Karn .
CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2017, :1175-1191
[4]   Large-Scale Machine Learning with Stochastic Gradient Descent [J].
Bottou, Leon .
COMPSTAT'2010: 19TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL STATISTICS, 2010, :177-186
[5]   SecureKeeper: Confidential ZooKeeper using Intel SGX [J].
Brenner, Stefan ;
Wulf, Colin ;
Goltzsche, David ;
Weichbrodt, Nico ;
Lorenz, Matthias ;
Fetzer, Christof ;
Pietzuch, Peter ;
Kapitza, Rudiger .
MIDDLEWARE '16: PROCEEDINGS OF THE 17TH INTERNATIONAL MIDDLEWARE CONFERENCE, 2016,
[6]  
Chen L., ARXIV PREPRINT ARXIV
[7]   A training-integrity privacy-preserving federated learning scheme with trusted execution environment [J].
Chen, Yu ;
Luo, Fang ;
Li, Tong ;
Xiang, Tao ;
Liu, Zheli ;
Li, Jin .
INFORMATION SCIENCES, 2020, 522 :69-79
[8]  
Contributors T., PYT DOC
[9]  
Costan V., 2016, IACR CRYPTOLOGY EPRI, V2016, P86, DOI 10.1159/000088809
[10]  
Denas O., 2013, REPRESENTATION LEARN