Mitigating Unfairness in Differentially-Private Federated Learning

Times Cited: 0
Authors
Du, Bingqian [1 ]
Xiang, Liyao [2 ]
Wu, Chuan [3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Wuhan, Peoples R China
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[3] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; fairness; differential privacy;
DOI
10.1145/3725847
Chinese Library Classification
TP [automation technology; computer technology];
Discipline classification code
0812;
Abstract
Federated learning is a learning paradigm that utilizes crowdsourced data stored on dispersed user devices (aka clients) to learn a global model. Studies have shown that even though data are kept on local devices, an adversary can still infer client information during the training process or from the learned model. Differential privacy has recently been introduced into deep learning model training to protect clients' data privacy. Nonetheless, its uniform clipping and noise addition exacerbate unfairness of the learned model among participating clients, even when the training loss function explicitly accounts for fairness. To validate the impact of the differential privacy mechanism in federated learning, we carefully approximate the correlation between fairness performance across clients and the fundamental operations of the differential privacy mechanism, and quantify the mechanism's influence on model performance across clients. Leveraging these theoretical findings on the effect of the differential privacy mechanism, we formulate the unfairness mitigation problem and propose an algorithm based on the modified method of differential multipliers. Extensive evaluation shows that our method outperforms a state-of-the-art differentially private federated learning algorithm by about 30% under non-i.i.d. data distributions, in terms of the variance of model performance across clients.
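The "uniform clipping and noise addition" the abstract refers to is the standard clip-and-noise step of differentially private federated averaging: every client's update is scaled down to a common norm bound, and Gaussian noise calibrated to that bound is added to the aggregate. A minimal sketch of this step is below; the function name, parameters, and structure are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm, noise_multiplier, rng=None):
    """Illustrative clip-and-noise aggregation step for DP federated averaging.

    Every client's update is clipped to the same L2 bound (the uniform
    clipping the paper identifies as a source of unfairness), then Gaussian
    noise with scale noise_multiplier * clip_norm is added to the sum.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Uniform clipping: scale each update so its L2 norm is <= clip_norm.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (the L2 sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)
```

Because the same `clip_norm` is applied to every client, clients whose true updates are large (e.g. those holding atypical, non-i.i.d. data) lose proportionally more signal, which is the disparity the paper's mitigation algorithm targets.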
Pages: 152