Mitigating Unfairness in Differentially-Private Federated Learning

Cited by: 0
Authors
Du, Bingqian [1 ]
Xiang, Liyao [2 ]
Wu, Chuan [3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Wuhan, Peoples R China
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[3] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; fairness; differential privacy;
DOI
10.1145/3725847
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning is a learning paradigm that uses crowdsourced data stored on dispersed user devices (clients) to learn a global model. Studies have shown that even though data remain on local devices, an adversary can still infer client information during training or from the learned model. Differential privacy has recently been introduced into deep learning model training to protect clients' data privacy. Nonetheless, because of its uniform clipping and noise addition, it exacerbates unfairness of the learned model among participating clients, even when the training loss function explicitly accounts for unfairness. To characterize the impact of the differential privacy mechanism in federated learning, we approximate the correlation between per-client fairness performance and the fundamental operations of the differential privacy mechanism, and quantify the mechanism's influence on model performance across clients. Building on these theoretical findings, we formulate the unfairness mitigation problem and propose an algorithm based on the modified method of differential multipliers. Extensive evaluation shows that our method outperforms a state-of-the-art differentially private federated learning algorithm by about 30% in terms of the variance of model performance across clients under non-i.i.d. data distributions.
Pages: 152
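The abstract attributes the unfairness to the uniform clipping and noise addition of the differential privacy mechanism and describes a mitigation algorithm based on the modified method of differential multipliers. Below is a minimal, hypothetical sketch of those two ingredients, not the authors' implementation; the function names and the parameters clip_norm, noise_multiplier, and damping are illustrative assumptions.

```python
# Hypothetical sketch only: standard uniform clipping + Gaussian noise as used in
# differentially-private federated aggregation, plus one generic step of the
# modified method of differential multipliers. Not the paper's actual code.
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip every client's update to the same norm bound, average, and add
    Gaussian noise calibrated to that bound (the 'uniform clipping and noise
    addition' the abstract refers to)."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Uniform clipping: all clients share one bound, regardless of how
        # large their (possibly non-i.i.d.) updates are.
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise scale is tied to the shared clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(client_updates),
                       size=avg.shape)
    return avg + noise

def mdmm_step(theta, lam, loss_grad, constraint, constraint_grad,
              lr=0.01, damping=1.0):
    """One generic primal-dual step of the modified method of differential
    multipliers for min loss(theta) s.t. constraint(theta) = 0; the paper's
    fairness constraint and update rule may differ."""
    g = constraint(theta)
    # Primal descent on the damped Lagrangian; gradient ascent on the multiplier.
    theta = theta - lr * (loss_grad(theta) + (lam + damping * g) * constraint_grad(theta))
    lam = lam + lr * g
    return theta, lam
```

In this template, the constraint would encode a bound on the performance gap across clients, so the multiplier steers the shared model toward updates that shrink that gap while the loss term is minimized.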