Defending CNN against privacy leakage in edge computing via binary neural networks

Cited by: 10
Authors
Qiang, Weizhong [1 ,2 ]
Liu, Renwan [1 ,3 ]
Jin, Hai [1 ,3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Serv Comp Technol & Syst Lab, Natl Engn Res Ctr Big Data Technol & Syst, Big Data Secur Engn Res Ctr, Cluster & Grid Comp Lab, Wuhan 430074, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Cyber Sci & Engn, Wuhan 430074, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2021, Vol. 125
Funding
National Natural Science Foundation of China
Keywords
Privacy-preserving machine learning; Binary neural network; Edge computing;
DOI
10.1016/j.future.2021.06.037
CLC number
TP301 [Theory and Methods]
Discipline code
081202
Abstract
As the IoT has developed, edge computing has come to play an increasingly important role in the IoT ecosystem. The edge computing paradigm offers low latency and high computing performance, which benefits machine learning tasks such as object detection in autonomous driving. However, data privacy risks in edge computing remain, and existing privacy-preserving methods are unsatisfactory due to their large computational overhead and unacceptable accuracy loss. We design a privacy-preserving machine learning framework that protects both user data and cloud data: users provide test data and want access to the data-processing models in the cloud for inference, while the cloud provides the training data used to train an eligible model; both aspects of privacy are considered in this paper. For user data, in order to maintain the overall performance of the framework while using homomorphic encryption (HE), instead of feeding encrypted data to the entire network, we split the neural network into two parts, one kept on the trusted edge and given plaintext input, and the other deployed on the untrusted cloud and given encrypted input. For cloud data, we apply a binary neural network, a network with binarized weights. This narrows the gap between the confidence scores the model predicts on the training and test sets, which is the main factor behind a successful exploratory attack on training data. Experiments demonstrate that the adversary's membership inference attack performs close to random guessing, while model accuracy is only slightly affected. Compared with the unencrypted network on VGG19, inference with HE is only 100 to 30 times slower when the network is split at layers from conv4_1 to fc8. (C) 2021 Elsevier B.V. All rights reserved.
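The two defenses described in the abstract can be illustrated with a minimal sketch: weight binarization for the cloud-side model, and a network split where the trusted edge computes on plaintext and the untrusted cloud computes on what would be HE-encrypted activations. The XNOR-Net-style scaling factor, the layer shapes, and the omission of the actual encryption step are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def binarize(w):
    """Binarize real-valued weights to {-alpha, +alpha}.
    XNOR-Net-style mean-absolute-value scaling is assumed here;
    the paper's exact binarization scheme may differ."""
    alpha = np.mean(np.abs(w))
    return alpha * np.where(w >= 0, 1.0, -1.0)

# Toy split-inference skeleton: early layers stay on the trusted edge
# and see plaintext; later layers run in the untrusted cloud on what
# would be homomorphically encrypted activations (the HE step is
# elided here -- in the paper the cloud part receives ciphertexts).
rng = np.random.default_rng(0)
W_edge = rng.normal(size=(16, 8))    # hypothetical edge-layer weights
W_cloud = rng.normal(size=(8, 4))    # hypothetical cloud-layer weights

def edge_part(x):
    # Plaintext compute on the edge; nonlinearities are cheap here.
    return np.maximum(x @ W_edge, 0.0)

def cloud_part(h):
    # Cloud-side linear layer using binarized weights, which narrows
    # the train/test confidence gap exploited by membership inference.
    return h @ binarize(W_cloud)

x = rng.normal(size=(1, 16))
logits = cloud_part(edge_part(x))
```

Splitting at a later layer (e.g. fc8 instead of conv4_1 in VGG19) leaves less work for the encrypted cloud side, which is why the reported HE slowdown varies with the split point.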
Pages: 460-470 (11 pages)