Joint Client Selection and Privacy Compensation for Differentially Private Federated Learning

Cited by: 0
Authors
Xu, Ruichen [1 ]
Zhang, Ying-Jun Angela [1 ]
Huang, Jianwei [2 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Informat Engn, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Sch Sci & Engn, Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518172, Peoples R China
Source
IEEE INFOCOM 2024 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS, INFOCOM WKSHPS 2024 | 2024
Funding
National Natural Science Foundation of China;
Keywords
Differential privacy; federated learning; incentive mechanism design; client selection; privacy heterogeneity;
DOI
10.1109/INFOCOMWKSHPS61880.2024.10620900
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Differentially private federated learning trains a global machine learning model from clients' distributed private data while providing privacy protection. However, clients under differential privacy still incur some potential privacy leakage, which can make them reluctant to participate in model training. Hence, client selection and privacy compensation are both key decisions in determining which clients join the learning process. In particular, since a client's privacy leakage increases with their participation frequency, the client selection decision is tightly coupled with the privacy compensation. This paper proposes a Bayesian-optimal mechanism design approach for joint client selection and privacy compensation. Although the resulting problem is a challenging non-convex optimization, we develop an efficient algorithm to solve it: we first characterize the optimal selection probabilities of clients with heterogeneous privacy sensitivities, which significantly reduces the dimension of the problem and enables an efficient solution. Numerical results show that the proposed mechanism Pareto dominates an unbiased selection-based mechanism in terms of both test accuracy and monetary cost. In particular, our mechanism reduces the test loss by up to 81.7% under the same monetary cost.
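The coupling described in the abstract, namely that a client's privacy leakage and hence the compensation owed grows with how often the client is selected, can be made concrete with a small simulation. The sketch below is not the authors' mechanism: it simply samples clients each round according to fixed, assumed selection probabilities, adds Gaussian noise to clipped updates in the usual differentially private fashion, tracks each client's participation count, and charges the server a payment proportional to an assumed per-round privacy cost. All variable names and numeric values (select_prob, privacy_cost, noise_std, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative setup (all values are assumptions, not from the paper) ---
num_clients = 10
num_rounds = 50
dim = 5                                    # toy model dimension
clip_norm = 1.0                            # per-update clipping bound (bounds sensitivity)
noise_std = 0.8                            # Gaussian noise scale for differential privacy
select_prob = rng.uniform(0.1, 0.9, num_clients)   # assumed per-client selection probabilities
privacy_cost = rng.uniform(0.5, 2.0, num_clients)  # assumed per-round privacy valuation of each client

global_model = np.zeros(dim)
participations = np.zeros(num_clients)     # how often each client has been selected
total_payment = 0.0

for t in range(num_rounds):
    selected = rng.random(num_clients) < select_prob
    updates = []
    for i in np.flatnonzero(selected):
        # Local update (a random surrogate gradient here), clipped to bound its norm.
        grad = rng.normal(size=dim)
        grad *= min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
        # Gaussian mechanism: noise is added before the update is shared.
        updates.append(grad + rng.normal(scale=noise_std, size=dim))
        participations[i] += 1
        # Compensation accumulates with participation, since leakage accumulates.
        total_payment += privacy_cost[i]
    if updates:
        global_model -= 0.1 * np.mean(updates, axis=0)

print("participation counts:", participations)
print("total compensation paid:", round(total_payment, 2))
```

In the paper's actual mechanism the selection probabilities and payments are jointly optimized under clients' heterogeneous privacy sensitivities; this toy loop only illustrates why the two decisions cannot be made independently.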
Pages: 6