Concentrated Differentially Private Federated Learning With Performance Analysis
Cited by: 20
Authors:
Hu, Rui [1]
Guo, Yuanxiong [2]
Gong, Yanmin [1]
Affiliations:
[1] Univ Texas San Antonio, Dept Elect & Comp Engn, San Antonio, TX 78249 USA
[2] Univ Texas San Antonio, Dept Informat Syst & Cyber Secur, San Antonio, TX 78249 USA
Source:
IEEE OPEN JOURNAL OF THE COMPUTER SOCIETY, 2021, Vol. 2
Funding:
U.S. National Science Foundation
Keywords:
Collaborative work; Servers; Privacy; Data models; Computational modeling; Training; Convergence; Federated learning; security and privacy; convergence analysis; zero-concentrated differential privacy; ATTACKS
DOI:
10.1109/OJCS.2021.3099108
Chinese Library Classification:
TP3 [Computing technology; computer technology]
Discipline code:
0812
Abstract:
Federated learning engages a set of edge devices to collaboratively train a common model without sharing their local data, and it offers an advantage in user privacy over traditional cloud-based learning approaches. However, recent model inversion attacks and membership inference attacks have demonstrated that the model updates shared during the interactive training process can still leak sensitive user information. It is therefore desirable to provide a rigorous differential privacy (DP) guarantee in federated learning. The main challenge is to maintain high model utility despite the randomness repeatedly introduced by DP mechanisms, especially when the server is not fully trusted. In this paper, we investigate how to provide DP for the most widely adopted federated learning scheme, federated averaging. Our approach combines local gradient perturbation, secure aggregation, and zero-concentrated differential privacy (zCDP) to achieve better utility and privacy protection without a trusted server. We jointly account for the randomness introduced by the DP mechanism, client sampling, and data subsampling, and we theoretically analyze the convergence rate and the end-to-end DP guarantee for non-convex loss functions. We also demonstrate through extensive numerical experiments on a real-world dataset that the proposed method achieves a good utility-privacy trade-off.
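The abstract describes differentially private federated averaging with local gradient perturbation and client sampling. As a rough illustration only, and not the authors' algorithm, the minimal NumPy sketch below runs one communication round in which each sampled client clips its local update and adds Gaussian noise before the server averages; the clipping bound C, noise scale sigma, sampling rate, and the linear-regression local step are all hypothetical choices made for the example. Under zCDP, a single Gaussian mechanism with L2 sensitivity C and noise sigma satisfies rho-zCDP with rho = C^2 / (2 * sigma^2).

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(model, data, lr=0.1):
    """Hypothetical local step: one gradient-descent step on squared loss."""
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    return model - lr * grad

def clip(v, C):
    """Scale v down so its L2 norm is at most C (bounds sensitivity)."""
    return v * min(1.0, C / (np.linalg.norm(v) + 1e-12))

def dp_fedavg_round(model, clients, C=1.0, sigma=0.5, sample_rate=0.5):
    """One round: sample clients, perturb local updates, average at the server."""
    sampled = [data for data in clients if rng.random() < sample_rate]
    if not sampled:
        return model
    noisy_updates = []
    for data in sampled:
        delta = local_update(model, data) - model          # local model delta
        delta = clip(delta, C)                             # clip to norm C
        delta += rng.normal(0.0, sigma, size=delta.shape)  # Gaussian perturbation
        noisy_updates.append(delta)
    # The server only sees the (already noisy) aggregate, as with secure aggregation.
    return model + np.mean(noisy_updates, axis=0)

# Toy usage: 4 clients holding synthetic linear-regression data.
d = 5
true_w = rng.normal(size=d)
clients = []
for _ in range(4):
    X = rng.normal(size=(20, d))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

w = np.zeros(d)
for _ in range(50):
    w = dp_fedavg_round(w, clients)
print("distance to true model:", np.linalg.norm(w - true_w))
```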
Pages: 276-289
Number of pages: 14