Stochastic ADMM Based Distributed Machine Learning with Differential Privacy

Cited by: 9
Authors
Ding, Jiahao [1 ]
Errapotu, Sai Mounika [1 ]
Zhang, Haijun [2 ]
Gong, Yanmin [3 ]
Pan, Miao [1 ]
Han, Zhu [1 ]
Affiliations
[1] Univ Houston, Dept Elect & Comp Engn, Houston, TX 77204 USA
[2] Univ Sci & Technol Beijing, Dept Commun Engn, Beijing 100083, Peoples R China
[3] Univ Texas San Antonio, Dept Elect & Comp Engn, San Antonio, TX 78249 USA
Source
SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, SECURECOMM, PT I | 2019 / Vol. 304
Funding
Beijing Natural Science Foundation; U.S. National Science Foundation; National Natural Science Foundation of China;
Keywords
Differential privacy; Distributed machine learning; Stochastic ADMM; Moments accountant; Distributed optimization; Privacy;
DOI
10.1007/978-3-030-37228-6_13
CLC Number
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
While embracing various machine learning techniques to make effective decisions in the big data era, preserving the privacy of sensitive data poses significant challenges. In this paper, we develop a privacy-preserving distributed machine learning algorithm to address this issue. Under the assumption that each data provider owns a dataset with a different sample size, our goal is to learn a common classifier over the union of all the local datasets in a distributed way without leaking any sensitive information of the data samples. Such an algorithm needs to jointly consider efficient distributed learning and effective privacy preservation. In the proposed algorithm, we extend the stochastic alternating direction method of multipliers (ADMM) to a distributed setting to perform distributed learning. To preserve privacy during the iterative process, we combine differential privacy with stochastic ADMM. In particular, we propose a novel stochastic ADMM based privacy-preserving distributed machine learning (PS-ADMM) algorithm that perturbs the updating gradients, which provides a differential privacy guarantee at a low computational cost. We theoretically demonstrate the convergence rate and utility bound of the proposed PS-ADMM under a strongly convex objective. Through experiments on real-world datasets, we show that PS-ADMM outperforms other differentially private ADMM algorithms under the same differential privacy guarantee.
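The abstract states that PS-ADMM obtains differential privacy by perturbing the updating gradients of a distributed stochastic ADMM iteration. As a rough sketch of that idea only, and not the paper's actual algorithm, the Python snippet below shows one worker's noisy mini-batch gradient step on a consensus-ADMM augmented Lagrangian; the logistic loss, the gradient clipping rule, the batch size of 32, and all parameter names (rho, lr, sigma, clip) are illustrative assumptions.

import numpy as np

def perturbed_local_update(w, z, y, X, labels, rho, lr, sigma, clip, rng):
    # One noisy stochastic-gradient step on the local augmented Lagrangian
    #   L(w) = f(w) + y^T (w - z) + (rho/2) ||w - z||^2   (consensus form).
    # The mini-batch size, logistic loss, and clipping rule are illustrative
    # assumptions, not the paper's exact PS-ADMM update.
    idx = rng.choice(len(labels), size=min(32, len(labels)), replace=False)
    Xb, yb = X[idx], labels[idx]          # labels assumed to be in {-1, +1}

    # mini-batch gradient of the logistic loss
    margins = yb * (Xb @ w)
    grad_loss = -(Xb * (yb / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)

    # clip so the batch gradient has bounded L2 norm (sensitivity <= clip)
    grad_loss *= min(1.0, clip / (np.linalg.norm(grad_loss) + 1e-12))

    # Gaussian mechanism: the noise multiplier sigma would be calibrated,
    # e.g. via the moments accountant, to a target (epsilon, delta)
    noisy_grad = grad_loss + rng.normal(0.0, sigma * clip, size=w.shape)

    # primal (gradient) step of the ADMM w-update
    return w - lr * (noisy_grad + y + rho * (w - z))

In a full run of this sketch, each worker would take such a step on its own data, a coordinator would average the local models into the consensus variable z, and each worker would then update its dual variable as y <- y + rho * (w - z); only the clipped, noise-perturbed gradients leave the worker, which is what yields the per-iteration privacy guarantee.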
Pages: 257-277
Number of pages: 21