Privacy-preserving model training architecture for intelligent edge computing

Cited by: 22
Authors
Qu, Xidi [1 ]
Hu, Qin [2 ]
Wang, Shengling [1 ]
Affiliations
[1] Beijing Normal Univ, Sch Artificial Intelligence, Beijing, Peoples R China
[2] Indiana Univ Purdue Univ Indianapolis, Dept Comp & Informat Sci, Indianapolis, IN USA
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Intelligent edge; Federated learning; Incentive mechanism; Privacy preservation;
DOI
10.1016/j.comcom.2020.07.045
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
With the rapid development of artificial intelligence and the increasing volume of data generated by end devices, traditional cloud-centric data processing is gradually being replaced by intelligent edge computing, which provides faster and closer-to-user service by overcoming the limits of network bandwidth and communication delay. However, end devices are severely resource-constrained for training machine learning (ML) models; moreover, protecting privacy and continuously improving ML models remain challenging. To address these problems, we propose an ML model training architecture that realizes intelligent edge computing in a novel cloud-edge-device cooperative manner consisting of two phases: (1) a cooperative federated pre-training phase between the cloud and edge servers, inspired by federated learning, with an incentive mechanism that allocates rewards fairly according to each edge server's contribution to pre-training the model; and (2) a privacy-preserving model segmentation training phase between the edge server and the device, which leverages homomorphic encryption to improve and protect the model on end devices while offloading a large amount of computation to edge servers. Extensive simulations on synthetic and real-world data demonstrate the effectiveness and feasibility of the proposed framework.
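To make the first phase concrete, the sketch below shows a minimal, FedAvg-style weighted aggregation of edge-server updates together with a contribution-proportional reward split. It is an illustration only, assuming sample counts as the contribution measure; the function names and the reward rule are not the paper's actual incentive mechanism.

# Illustrative sketch only (assumed names and weighting; not the paper's protocol).
import numpy as np

def aggregate(updates, sample_counts):
    # FedAvg-style weighted average of per-edge-server model updates.
    total = sum(sample_counts)
    return sum((n / total) * u for n, u in zip(sample_counts, updates))

def allocate_rewards(contributions, budget):
    # Split a fixed reward budget in proportion to measured contributions.
    total = sum(contributions)
    return [budget * c / total for c in contributions]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    updates = [rng.normal(size=4) for _ in range(3)]  # per-edge-server updates
    counts = [100, 300, 600]                          # local samples per edge server
    print(aggregate(updates, counts))
    print(allocate_rewards(counts, budget=1.0))

The second phase would additionally encrypt the device-side model segment with an additively homomorphic scheme before the edge server computes on it, which this sketch does not attempt to reproduce.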
Pages: 94-101
Number of pages: 8