Differentially Private Distributed Multi-Task Relationship Learning

Cited by: 1
Authors
Jin, Kaizhong [1]
Cheng, Xiang [1 ]
Ma, Yuchao [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing, Peoples R China
Source
2022 31ST INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS (ICCCN 2022) | 2022
Keywords
distributed multi-task relationship learning; differential privacy; variance reduction; task relationship calibration; NOISE;
DOI
10.1109/ICCCN54977.2022.9868915
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
In many real-world applications, data are distributed across different geographical regions and may come from different distributions, which gives rise to multiple learning tasks. In such cases, distributed multi-task learning is commonly used to learn the related tasks jointly and thereby improve the generalization performance of each task. Among distributed multi-task learning algorithms, distributed multi-task relationship learning (DMTRL) has attracted much attention in the community because it learns task relationships from data instead of imposing a prior assumption of task relatedness. To perform DMTRL, each task model or its gradient is transferred between the task node and a central server, which poses a potential privacy violation when the distributed data are held by different institutions and contain sensitive information (e.g., medical records). In this paper, we propose DRUPE, a distributed multi-task relationship learning approach under differential privacy, in which privacy protection is achieved by perturbing the gradients at each task node. In particular, to reduce the high variance of the perturbed gradients and achieve a fast convergence rate, we develop a variance-reduction-based gradient calibration method, which first estimates the gradient error from the previous perturbed gradients and then calibrates the current perturbed gradient by subtracting the gradient error term. Moreover, to alleviate the negative effect of the inaccurate task relationships inferred from the private task models, we present a task relationship calibration method that uses a least-squares approximation algorithm to calibrate the inaccurate pairwise relationships between tasks. Experimental results on real-world datasets confirm the effectiveness of our approach.
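To make the two calibration ideas in the abstract concrete, the sketch below is a minimal illustration, not the authors' DRUPE algorithm: all function names, the clipping bound, the noise scale sigma, the shrinkage factor beta, and the specific averaging and projection choices are assumptions. It perturbs a clipped gradient with Gaussian noise at a task node, damps the noise by shrinking the perturbed gradient towards a running average of earlier perturbed gradients (one simple reading of "subtracting the gradient error term"), and calibrates a noisy task-relationship matrix by a least-squares (Frobenius-norm) projection onto the symmetric positive semidefinite cone.

```python
# Illustrative sketch only; constants and function names are assumptions.
import numpy as np

def clip_gradient(g, c):
    """Bound the L2 norm of the gradient by c (limits sensitivity)."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def perturb_gradient(g, c, sigma, rng):
    """Gaussian-mechanism perturbation of a clipped gradient."""
    return clip_gradient(g, c) + rng.normal(0.0, sigma * c, size=g.shape)

def calibrate_gradient(noisy_g, running_avg, beta=0.5):
    """Variance-reduction-style calibration: treat the deviation of the
    current perturbed gradient from the running average of previous
    perturbed gradients as an error estimate and subtract part of it."""
    if running_avg is None:
        return noisy_g, noisy_g.copy()
    error_estimate = noisy_g - running_avg
    calibrated = noisy_g - beta * error_estimate
    new_avg = beta * running_avg + (1.0 - beta) * noisy_g
    return calibrated, new_avg

def calibrate_relationship(omega_noisy):
    """Least-squares calibration of a noisy task-relationship matrix: the
    closest symmetric positive semidefinite matrix in Frobenius norm is
    obtained by symmetrizing and zeroing negative eigenvalues."""
    sym = 0.5 * (omega_noisy + omega_noisy.T)
    vals, vecs = np.linalg.eigh(sym)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

# Toy usage: one task node runs a few private gradient steps.
rng = np.random.default_rng(0)
w, avg = np.zeros(5), None
for _ in range(10):
    true_grad = w - np.ones(5)            # gradient of 0.5 * ||w - 1||^2
    noisy = perturb_gradient(true_grad, c=1.0, sigma=0.5, rng=rng)
    step, avg = calibrate_gradient(noisy, avg)
    w -= 0.1 * step
```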
Pages: 10
Related References (31 in total)
[1] Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H. B.; Mironov, I.; Talwar, K.; Zhang, L. Deep Learning with Differential Privacy. CCS'16: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016: 308-318.
[2] Amin, K. NeurIPS, 2019: 14213.
[3] Boyd, S. Convex Optimization, 2004.
[4] Caruana, R. Multitask Learning. Machine Learning, 1997, 28(1): 41-75.
[5] Chen, J.; Zhang, J.; Zhao, Y.; Han, H.; Zhu, K.; Chen, B. Beyond Model-Level Membership Privacy Leakage: An Adversarial Approach in Federated Learning. 2020 29th International Conference on Computer Communications and Networks (ICCCN 2020), 2020.
[6] De, S. IEEE International Conference on Data Mining (ICDM), 2016: 111. DOI: 10.1109/ICDM.2016.0022; 10.1109/ICDM.2016.177.
[7] Defazio, A. Advances in Neural Information Processing Systems, 2016, 29.
[8] Defazio, A. Advances in Neural Information Processing Systems, 2014, 27.
[9] Dwork, C. Lecture Notes in Computer Science, 2006, 4004: 486.
[10] Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating Noise to Sensitivity in Private Data Analysis. Theory of Cryptography, Proceedings, 2006, 3876: 265-284.