ParSecureML: An Efficient Parallel Secure Machine Learning Framework on GPUs

Cited by: 6
Authors
Chen, Zheng [1 ,2 ]
Zhang, Feng [1 ,2 ]
Zhou, Amelie Chi [3 ]
Zhai, Jidong [4 ]
Zhang, Chenyang [1 ,2 ]
Du, Xiaoyong [1 ,2 ]
Affiliations
[1] Renmin Univ China, Key Lab Data Engn & Knowledge Engn, MOE, Beijing, Peoples R China
[2] Renmin Univ China, Sch Informat, Beijing, Peoples R China
[3] Shenzhen Univ, Guangdong Prov Engn Ctr China-made High Performance, Shenzhen, Peoples R China
[4] Tsinghua Univ, Dept Comp Sci & Technol, BNRist, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 49TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2020 | 2020
Funding
National Key Research and Development Program of China; Beijing Natural Science Foundation;
Keywords
Security; GPU; Machine Learning; Two-Party Computation;
DOI
10.1145/3404397.3404399
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Machine learning has been widely used in our daily lives. Large amounts of data are continuously produced and transmitted to the cloud for model training and data processing, which raises a problem: how to preserve the security of the data. Recently, a secure machine learning system named SecureML has been proposed to solve this issue using two-party computation. However, due to the excessive computation expense of two-party computation, secure machine learning is about 2x slower than the original machine learning methods. Previous work on secure machine learning has mostly focused on novel protocols or on improving accuracy, while performance has been largely ignored. In this paper, we propose ParSecureML, a GPU-based framework that improves the performance of secure machine learning algorithms based on two-party computation. The main challenges in developing ParSecureML lie in the complex computation patterns, frequent intra-node data transmission between CPU and GPU, and complicated inter-node data dependences. To handle these challenges, we propose a series of novel solutions, including profiling-guided adaptive GPU utilization, a fine-grained double pipeline for intra-node CPU-GPU cooperation, and compressed transmission for inter-node communication. To the best of our knowledge, this is the first GPU-based secure machine learning framework. Compared to the state-of-the-art framework, ParSecureML achieves an average speedup of 32.2x. ParSecureML can be downloaded from https://github.com/ZhengChenCS/ParSecureML.
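The two-party computation at the heart of SecureML-style training multiplies additively secret-shared values using precomputed Beaver triples; the extra masking, exchange, and recombination work is what makes the secure version slower than plain training, and this arithmetic is what a GPU framework like ParSecureML accelerates. The following minimal sketch simulates one Beaver-triple multiplication over Z_2^64 inside a single process; the party roles, variable names, and in-process "exchange" are illustrative assumptions, not code from the ParSecureML repository.

// Sketch: one Beaver-triple multiplication over Z_{2^64}, both parties
// simulated in-process. uint64_t wraparound gives arithmetic mod 2^64 for free.
#include <cstdint>
#include <cstdio>
#include <random>

int main() {
    std::mt19937_64 rng(42);
    uint64_t x = 7, y = 6;               // plaintext inputs

    // Additive secret shares: x = x0 + x1 (mod 2^64), likewise for y.
    uint64_t x0 = rng(), x1 = x - x0;
    uint64_t y0 = rng(), y1 = y - y0;

    // Precomputed Beaver triple c = a*b, also additively shared.
    uint64_t a = rng(), b = rng(), c = a * b;
    uint64_t a0 = rng(), a1 = a - a0;
    uint64_t b0 = rng(), b1 = b - b0;
    uint64_t c0 = rng(), c1 = c - c0;

    // Each party reveals its masked shares; e and f leak nothing about x, y.
    uint64_t e = (x0 - a0) + (x1 - a1);  // e = x - a
    uint64_t f = (y0 - b0) + (y1 - b1);  // f = y - b

    // Local product shares: z0 + z1 = ef + fa + eb + ab = x*y.
    uint64_t z0 = f * a0 + e * b0 + c0;          // party P0
    uint64_t z1 = e * f + f * a1 + e * b1 + c1;  // party P1 also adds e*f

    printf("reconstructed x*y = %llu, expected %llu\n",
           (unsigned long long)(z0 + z1), (unsigned long long)(x * y));
    return 0;
}

In secure training these share operations run on whole matrices, so every layer needs large masked-matrix products on the GPU interleaved with CPU-side communication, which is why the abstract stresses intra-node CPU-GPU transfers. A common way to overlap such transfers with computation, in the spirit of the paper's fine-grained double pipeline, is double buffering on two CUDA streams; the chunking scheme, the placeholder addMask kernel, and the schedule below are assumptions for illustration, not the framework's actual pipeline.

// Sketch: double-buffered CPU<->GPU pipeline on two CUDA streams. While one
// stream computes on chunk k, the other copies chunk k+1, so transfer and
// compute overlap (pinned host memory is required for async copies).
#include <cstdint>
#include <cuda_runtime.h>

__global__ void addMask(uint64_t* data, uint64_t mask, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] += mask;          // stand-in for the real share kernel
}

int main() {
    const size_t chunk = 1 << 20, nChunks = 8;
    uint64_t* h;
    cudaMallocHost((void**)&h, nChunks * chunk * sizeof(uint64_t)); // pinned
    for (size_t i = 0; i < nChunks * chunk; ++i) h[i] = i;

    uint64_t* d[2];
    cudaStream_t s[2];
    for (int i = 0; i < 2; ++i) {
        cudaMalloc((void**)&d[i], chunk * sizeof(uint64_t));
        cudaStreamCreate(&s[i]);
    }

    for (size_t k = 0; k < nChunks; ++k) {
        int buf = k & 1;                 // alternate buffer/stream each chunk
        cudaMemcpyAsync(d[buf], h + k * chunk, chunk * sizeof(uint64_t),
                        cudaMemcpyHostToDevice, s[buf]);
        addMask<<<(chunk + 255) / 256, 256, 0, s[buf]>>>(d[buf], 0x9e3779b9u, chunk);
        cudaMemcpyAsync(h + k * chunk, d[buf], chunk * sizeof(uint64_t),
                        cudaMemcpyDeviceToHost, s[buf]);
    }
    cudaDeviceSynchronize();

    for (int i = 0; i < 2; ++i) { cudaFree(d[i]); cudaStreamDestroy(s[i]); }
    cudaFreeHost(h);
    return 0;
}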
Pages: 11