Differentially Private Robust ADMM for Distributed Machine Learning

Cited by: 0
Authors
Ding, Jiahao [1 ]
Zhang, Xinyue [1 ]
Chen, Mingsong [2 ]
Xue, Kaiping [3 ]
Zhang, Chi [4 ]
Pan, Miao [1 ]
Affiliations
[1] Univ Houston, Dept Elect & Comp Engn, Houston, TX 77204 USA
[2] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai 200062, Peoples R China
[3] Univ Sci & Technol China, Dept EEIS, Hefei 230027, Peoples R China
[4] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
Source
2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA) | 2019
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
differential privacy; distributed machine learning; ADMM; decentralized optimization; convergence; consensus;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
To embrace the era of big data, there has been growing interest in distributed machine learning that exploits the collective computing power of local computing nodes, and the Alternating Direction Method of Multipliers (ADMM) is one of the most popular methods for this setting. ADMM alternates between iterative local computations over each agent's local dataset and exchanges of the computation results among neighbors. Because these local computations operate on sensitive data, the iterative process can leak private information. Although many differentially private ADMM algorithms have been proposed to address such leakage, they still face challenging issues such as low model accuracy under strict privacy constraints and the need for strong assumptions on the convexity of the objective function. To address these issues, we propose a differentially private robust ADMM algorithm (PR-ADMM) based on the Gaussian mechanism. PR-ADMM employs two noise variance decay schemes to carefully adjust the noise added at each iteration, and uses a threshold to discard overly noisy results received from neighbors. We prove that PR-ADMM satisfies dynamic zero-concentrated differential privacy (dynamic zCDP) and express its total privacy loss in terms of (epsilon, delta)-differential privacy. From a theoretical point of view, we analyze the convergence rate of PR-ADMM for general convex objectives, showing that it is O(1/K), where K is the number of iterations. Experiments on real-world datasets show that PR-ADMM outperforms other differentially private ADMM-based algorithms under the same total privacy loss.
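To make the mechanism described in the abstract more concrete, below is a minimal, illustrative Python sketch of a perturbed decentralized consensus iteration in the spirit of PR-ADMM; it is not the authors' implementation. It assumes a linearized (gradient-step) primal update, a geometric noise variance decay scheme, and a simple norm-based test for rejecting overly noisy neighbor messages. The names `pr_admm_sketch`, `local_grads`, `adjacency`, `sigma0`, `decay`, and `threshold` are illustrative choices, not quantities from the paper, and no privacy accounting is performed.

```python
import numpy as np

def pr_admm_sketch(local_grads, adjacency, dim, K=50, rho=1.0,
                   eta=0.1, sigma0=1.0, decay=0.95, threshold=5.0, seed=0):
    """Illustrative perturbed decentralized consensus iteration (not the
    authors' exact PR-ADMM): each agent broadcasts a Gaussian-perturbed
    copy of its primal variable, the noise standard deviation decays
    geometrically over iterations (an assumed decay scheme), and neighbor
    messages deviating from the agent's own iterate by more than
    `threshold` are discarded before the primal and dual updates."""
    rng = np.random.default_rng(seed)
    n = len(adjacency)
    x = np.zeros((n, dim))      # primal variables, one row per agent
    lam = np.zeros((n, dim))    # dual variables, one row per agent
    for k in range(K):
        sigma_k = sigma0 * decay ** k                        # decaying noise std
        noisy = x + rng.normal(0.0, sigma_k, size=x.shape)   # perturbed broadcasts
        for i in range(n):
            # robustness step: keep only neighbor messages close to x[i]
            msgs = [noisy[j] for j in adjacency[i]
                    if np.linalg.norm(noisy[j] - x[i]) <= threshold]
            avg = np.mean(msgs, axis=0) if msgs else x[i]
            # linearized primal update: local gradient + dual + consensus pull
            grad = local_grads[i](x[i])
            x[i] = x[i] - eta * (grad + lam[i] + rho * (x[i] - avg))
            # dual ascent step on the consensus constraint x[i] = avg
            lam[i] = lam[i] + rho * (x[i] - avg)
    return x

# Toy usage: 3 agents on a ring, quadratic local losses 0.5 * ||x - c_i||^2
centers = [np.full(5, c) for c in (1.0, 2.0, 3.0)]
grads = [lambda x, c=c: x - c for c in centers]
x_final = pr_admm_sketch(grads, adjacency=[[1, 2], [0, 2], [0, 1]], dim=5)
```

On the privacy side, a standard way to report the total loss of a zCDP mechanism, following Bun and Steinke, is the conversion that rho-zCDP implies (rho + 2*sqrt(rho * ln(1/delta)), delta)-differential privacy for any delta > 0; the sketch above omits this accounting entirely.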
Pages: 1302-1311
Number of pages: 10