Ensuring Fairness and Gradient Privacy in Personalized Heterogeneous Federated Learning

Cited by: 5
Authors
Lewis, Cody [1 ]
Varadharajan, Vijay [1 ]
Noman, Nasimul [1 ]
Tupakula, Uday [1 ,2 ]
Affiliations
[1] Univ Newcastle, Adv Cyber Secur Engn Res Ctr ACSRC, Univ Dr, Newcastle, NSW 2308, Australia
[2] Univ New England, Elm Ave, Armidale, NSW 2351, Australia
Keywords
Federated learning; device heterogeneity; fairness; privacy
DOI
10.1145/3652613
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
With the increasing tension between the conflicting requirements of making large amounts of data available for effective machine learning-based analysis and of ensuring data privacy, the paradigm of federated learning has emerged: a distributed machine learning setting in which clients provide only model updates to the server rather than the actual data used for decision making. However, the distributed nature of federated learning raises specific challenges related to fairness in a heterogeneous setting. This motivates the focus of our article: the heterogeneity of client devices with differing computational capabilities and its impact on fairness in federated learning. Furthermore, our aim is to achieve fairness under heterogeneity while ensuring privacy. As far as we are aware, no existing work addresses all three aspects of fairness, device heterogeneity, and privacy simultaneously in federated learning. In this article, we propose a novel federated learning algorithm with personalization in the context of heterogeneous devices, while maintaining compatibility with the gradient privacy preservation techniques of secure aggregation. We analyze the proposed algorithm in different environments with different datasets and show that it achieves performance close to or greater than the state of the art in heterogeneous-device personalized federated learning. We also provide theoretical proofs of the fairness and convergence properties of our proposed algorithm.
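To make the federated learning paradigm described in the abstract concrete, the sketch below shows one round of the generic client-update/server-aggregation loop (in the style of federated averaging), where clients send only trained model weights, never their raw data. This is a minimal illustration of the general setting, not the paper's proposed algorithm; the function names `client_update` and `federated_average` and the toy linear model are illustrative assumptions.

```python
import numpy as np

def client_update(weights, data, labels, lr=0.1, epochs=1):
    """Local training on one client: a single-layer linear model
    fit with plain gradient descent (illustrative only)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w  # only the updated weights leave the client

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: dataset-size-weighted mean of client models."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: two clients each fit y = 2x on private local data,
# then the server averages their models without seeing the data.
rng = np.random.default_rng(0)
global_w = np.zeros(1)
client_models, client_sizes = [], []
for _ in range(2):
    x = rng.normal(size=(32, 1))   # private local features
    y = 2.0 * x[:, 0]              # private local targets
    client_models.append(client_update(global_w, x, y, lr=0.5, epochs=20))
    client_sizes.append(len(y))
global_w = federated_average(client_models, client_sizes)
```

In a privacy-preserving deployment such as the secure aggregation setting the abstract refers to, the server would receive only masked sums of such updates rather than individual client weights.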
Pages: 30