Toward the Tradeoffs Between Privacy, Fairness and Utility in Federated Learning

Cited: 0
Authors
Sun, Kangkang [1 ]
Zhang, Xiaojin [2 ]
Lin, Xi [1 ]
Li, Gaolei [1 ]
Wang, Jing [1 ]
Li, Jianhua [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai Key Lab Integrated Adm Technol Informat, Shanghai, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan, Peoples R China
Source
EMERGING INFORMATION SECURITY AND APPLICATIONS, EISA 2023 | 2024, Vol. 2004
Funding
National Natural Science Foundation of China;
Keywords
Fair and Private Federated Learning; Differential Privacy; Privacy Protection;
DOI
10.1007/978-981-99-9614-8_8
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Federated Learning (FL) is a privacy-preserving distributed machine learning paradigm: because clients train locally and never share raw data, it reduces the risk of data leakage. Considerable effort has gone into designing FL systems that ensure fairness of results, but the interplay between fairness and privacy has received less attention. Improving the fairness of an FL system can affect user privacy, and strengthening user privacy can in turn affect fairness. In this work, we use fairness metrics such as Demographic Parity (DemP), Equalized Odds (EOs), and Disparate Impact (DI) on the client side to construct a locally fair model, and we propose a privacy-preserving fair FL method to protect the client model. The results show that adding privacy protection increases the accuracy of the fair model, because the privacy mechanism loosens the fairness-metric constraints. Our experiments characterize the relationship between privacy, fairness, and utility, and show that there is a tradeoff among them.
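The abstract names three group-fairness metrics and a differential-privacy protection step. Below is a minimal illustrative sketch (not the authors' implementation) of how those quantities are commonly computed for binary predictions with a binary sensitive attribute, plus a Gaussian-mechanism perturbation of a client update as a stand-in for the privacy step. All function names, the clipping bound, and the noise multiplier are assumptions for illustration only.

# Sketch of the fairness metrics named in the abstract (DemP, EOs, DI) and a
# Gaussian-mechanism perturbation of a client update; names are illustrative.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)|; 0 means parity."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_pred, y_true, group):
    """Max gap across groups in false-positive (label=0) and true-positive (label=1) rates."""
    gaps = []
    for label in (0, 1):
        mask0 = (group == 0) & (y_true == label)
        mask1 = (group == 1) & (y_true == label)
        gaps.append(abs(y_pred[mask0].mean() - y_pred[mask1].mean()))
    return max(gaps)

def disparate_impact(y_pred, group):
    """Ratio of positive prediction rates; values near 1 indicate low disparity."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return min(rate0, rate1) / max(rate0, rate1)

def gaussian_dp_perturb(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip an update to clip_norm and add Gaussian noise (Gaussian mechanism)."""
    rng = rng or np.random.default_rng()
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    group = rng.integers(0, 2, size=1000)
    y_pred = (y_true ^ (rng.random(1000) < 0.2)).astype(int)  # noisy predictions
    print("DemP gap:", demographic_parity_gap(y_pred, group))
    print("EOs gap :", equalized_odds_gap(y_pred, y_true, group))
    print("DI ratio:", disparate_impact(y_pred, group))
    print("noisy update:", gaussian_dp_perturb(rng.normal(size=5)))

In this sketch, lower DemP/EOs gaps and a DI ratio near 1 indicate a fairer model, while the clipping bound and noise multiplier control the strength of the (assumed) differential-privacy perturbation.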
Pages: 118-132
Page count: 15
References
52 in total
[41] Russell C., 2017, Advances in Neural Information Processing Systems, Vol. 30.
[42] Scheliga D., Mader P., Seeland M. PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage. 2022 IEEE Winter Conference on Applications of Computer Vision (WACV 2022), 2022, pp. 3605-3614.
[43] Shao J.W., 2024, arXiv:2307.10655.
[44] Tran C., 2021, AAAI Conference on Artificial Intelligence, Vol. 35, p. 9932.
[45] Wang H., 2020, Proceedings of IEEE INFOCOM, p. 1698, DOI 10.1109/INFOCOM41043.2020.9155494.
[46] Wang S., 2020, Advances in Neural Information Processing Systems, Vol. 33.
[47] Wei K., Li J., Ding M., Ma C., Yang H.H., Farokhi F., Jin S., Quek T.Q.S., Poor H.V. Federated Learning With Differential Privacy: Algorithms and Performance Analysis. IEEE Transactions on Information Forensics and Security, 2020, 15: 3454-3469.
[48] Wu Y.Z., 2024, arXiv:2111.08211.
[49] Xu R., 2021, arXiv.
[50] Yu H., Liu Z., Liu Y., Chen T., Cong M., Weng X., Niyato D., Yang Q. A Fairness-aware Incentive Scheme for Federated Learning. Proceedings of the 3rd AAAI/ACM Conference on AI, Ethics, and Society (AIES 2020), 2020, pp. 393-399.