Personalized Differentially Private Federated Learning without Exposing Privacy Budgets

Cited by: 5
Authors
Liu, Junxu [1 ]
Lou, Jian [2 ]
Xiong, Li [3 ]
Meng, Xiaofeng [1 ]
Affiliations
[1] Renmin Univ China, Beijing, Peoples R China
[2] Zhejiang Univ, Hangzhou, Peoples R China
[3] Emory Univ, Atlanta, GA USA
Source
PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023 | 2023
Funding
U.S. National Science Foundation;
Keywords
Differential Privacy; Federated Learning; Personalized Privacy;
D O I
10.1145/3583780.3615247
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The meteoric rise of cross-silo Federated Learning (FL) is due to its ability to mitigate data breaches during collaborative training. To further provide rigorous privacy protection that accounts for the varying privacy requirements across different clients, a privacy-enhanced line of work on personalized differentially private federated learning (PDP-FL) has been proposed. However, the existing solution for PDP-FL [21] assumes that the raw privacy budgets of all clients are collected by the server. These values are then used directly to improve model utility by facilitating privacy-preference partitioning (i.e., partitioning all clients into multiple privacy groups). This assumption is unrealistic, because the raw privacy budgets can themselves be quite informative and sensitive. In this work, our goal is to achieve PDP-FL without exposing clients' raw privacy budgets, by indirectly partitioning the privacy preferences solely based on clients' noisy model updates. The crux lies in the fact that the noisy updates are influenced by two entangled factors, DP noise and the non-IID data of clients, leaving it unknown whether privacy preferences can be uncovered by disentangling the two. To overcome this hurdle, we systematically investigate the unexplored question of under what conditions the model updates of clients are primarily influenced by noise levels rather than data distribution. We then propose a simple yet effective strategy based on clustering the L2 norms of the noisy updates, which can be integrated into the vanilla PDP-FL while maintaining the same performance. Experimental results demonstrate the effectiveness and feasibility of our privacy-budget-agnostic PDP-FL method.
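The core idea of the abstract, grouping clients by the L2 norms of their noisy updates instead of their raw privacy budgets, can be sketched as follows. This is a minimal illustrative simulation, not the authors' implementation: the client counts, dimensionality, noise scales, and the tiny 1-D k-means helper are all assumptions chosen so that noise level (rather than data distribution) dominates the norms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 clients perturb a shared model update with
# Gaussian noise whose scale reflects their (private) privacy budget.
# A smaller epsilon means larger noise, hence a larger expected L2 norm.
d = 1000
true_update = rng.normal(size=d) * 0.01
noise_scales = [0.05] * 10 + [0.5] * 10   # two hidden privacy groups
updates = [true_update + rng.normal(scale=s, size=d) for s in noise_scales]

# Server side: cluster ONLY the L2 norms of the noisy updates,
# never the raw privacy budgets (simple 1-D k-means).
norms = np.array([np.linalg.norm(u) for u in updates])

def kmeans_1d(x, k=2, iters=50):
    # Spread initial centers across the empirical distribution of norms.
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

groups = kmeans_1d(norms, k=2)
print(groups)  # clients with the same noise scale land in the same group
```

With noise scales this far apart, the norms concentrate around roughly 0.05·√d and 0.5·√d, so the two privacy groups separate cleanly; the paper's contribution is precisely characterizing when such separation holds despite non-IID data.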
Pages: 4140-4144
Number of pages: 5
References
34 in total
[1] Abadi, Martin; Chu, Andy; Goodfellow, Ian; McMahan, H. Brendan; Mironov, Ilya; Talwar, Kunal; Zhang, Li. Deep Learning with Differential Privacy. CCS'16: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016: 308-318.
[2] Agarwal, N. Advances in Neural Information Processing Systems, 2018, Vol. 31.
[3] Agarwal, N. Advances in Neural Information Processing Systems, 2021.
[4] Chen, X. NeurIPS, 2020.
[5] Dwork, Cynthia; McSherry, Frank; Nissim, Kobbi; Smith, Adam. Calibrating Noise to Sensitivity in Private Data Analysis. Theory of Cryptography, Proceedings, 2006, 3876: 265-284.
[6] Dwork, Cynthia; Roth, Aaron. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science, 2013, 9(3-4): 211-406.
[7] Geyer, R. C. arXiv:1712.07557, CoRR, 2017.
[8] Gu, Xin. arXiv:2303.01256, 2023.
[9] Hanzely, Filip. NeurIPS, 2020.
[10] Huang, J. CMU technical report, 2005, Vol. 18.