Toward the Tradeoffs Between Privacy, Fairness and Utility in Federated Learning

Cited by: 0
Authors
Sun, Kangkang [1 ]
Zhang, Xiaojin [2 ]
Lin, Xi [1 ]
Li, Gaolei [1 ]
Wang, Jing [1 ]
Li, Jianhua [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai Key Lab Integrated Adm Technol Informat, Shanghai, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan, Peoples R China
Source
EMERGING INFORMATION SECURITY AND APPLICATIONS, EISA 2023 | 2024, Vol. 2004
Funding
National Natural Science Foundation of China
Keywords
Fair and Private Federated Learning; Differential Privacy; Privacy Protection
DOI
10.1007/978-981-99-9614-8_8
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Federated Learning (FL) is a privacy-preserving distributed machine learning paradigm that reduces the risk of data leakage because each client trains on its data locally. Researchers have devoted considerable effort to designing fair FL systems that ensure fairness of the results, but the interplay between fairness and privacy has received less attention. Increasing the fairness of an FL system can affect user privacy, and strengthening user privacy can in turn affect fairness. In this work, on the client side, we use fairness metrics such as Demographic Parity (DemP), Equalized Odds (EOs), and Disparate Impact (DI) to train a local fair model. To protect the privacy of the client model, we propose a privacy-preserving fair FL method. The results show that the accuracy of the fair model increases under privacy protection, because the privacy mechanism relaxes the constraints imposed by the fairness metrics. From our experiments, we characterize the relationship between privacy, fairness, and utility, and show that there is a tradeoff among them.
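The abstract names the fairness metrics but does not define them; below is a minimal sketch, assuming a binary classifier and a binary sensitive attribute, of how DemP, EOs, and DI are commonly computed on a client's local predictions. The function name and arguments are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fairness_metrics(y_true, y_pred, sensitive):
    """Standard group-fairness measures for binary predictions.

    y_true, y_pred : 0/1 labels and predictions.
    sensitive      : 0/1 group membership (illustrative assumption).
    """
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    g0, g1 = (sensitive == 0), (sensitive == 1)

    # Demographic Parity gap: difference in positive-prediction rates between groups.
    demp_gap = abs(y_pred[g1].mean() - y_pred[g0].mean())

    # Equalized Odds gap: largest gap in TPR or FPR between groups.
    def positive_rate(group_mask, true_label):
        sel = group_mask & (y_true == true_label)
        return y_pred[sel].mean() if sel.any() else 0.0
    tpr_gap = abs(positive_rate(g1, 1) - positive_rate(g0, 1))
    fpr_gap = abs(positive_rate(g1, 0) - positive_rate(g0, 0))
    eo_gap = max(tpr_gap, fpr_gap)

    # Disparate Impact: ratio of positive-prediction rates (the "80% rule").
    di = y_pred[g1].mean() / max(y_pred[g0].mean(), 1e-12)

    return {"DemP_gap": demp_gap, "EO_gap": eo_gap, "DI": di}
```

In a fair-FL setup of the kind the abstract describes, such gaps would typically enter the client's local objective as constraints or penalty terms, while the privacy mechanism (e.g., differentially private noise on the client update) perturbs that optimization; the specifics of the paper's method are given in the full text.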
Pages: 118-132
Number of pages: 15