PLFa-FL: Personalized Local Differential Privacy for Fair Federated Learning

Cited by: 0
Authors
Cai, Hongyun [1 ,2 ]
Zhang, Meiling [1 ,2 ]
Wang, Shiyun [1 ,2 ]
Zhao, Ao [1 ,2 ]
Zhang, Yu [1 ,2 ]
Affiliations
[1] Hebei Univ, Sch Cyber Secur & Comp, Baoding 071000, Hebei, Peoples R China
[2] Hebei Univ, Key Lab High Trusted Informat Syst Hebei Prov, Baoding 071000, Hebei, Peoples R China
Source
PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024 | 2024
Keywords
Federated learning; privacy leakage; personalized local differential privacy; fair sampling
DOI
10.1109/CSCWD61410.2024.10580666
CLC classification number
TP39 [Applications of Computers];
Discipline classification code
081203 ; 0835 ;
Abstract
In federated learning, clients can apply local differential privacy to the uploaded gradients or parameters to reduce the potential risk of privacy leakage. However, most existing solutions set a uniform privacy level for all clients, which cannot meet users' differing individual privacy needs. A few schemes introduce personalized differential privacy, but they ignore the data utility of the whole system and the issue of fair client sampling. In this paper, we propose a novel framework for fair federated learning with personalized local differential privacy (PLFa-FL), which achieves fair client sampling while balancing privacy and data utility. First, we propose a fair sampling mechanism that combines each client's local loss value with its historical participation record. Then, we consider the impact of the privacy budget threshold on model performance. To balance privacy and data utility, we design a privacy budget waste function that determines the optimized privacy budget threshold and the privacy budget each client actually uses. Experiments on the MNIST and EMNIST datasets confirm that PLFa-FL compares favorably against the baseline methods in terms of model performance, running time, and fairness.
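A minimal, illustrative sketch of the two ideas summarized in the abstract: sampling clients by a score that mixes local loss with participation history, and perturbing each client's update under a personalized local privacy budget capped at a global threshold. This is not the authors' implementation; all names and constants (alpha, eps_threshold, sensitivity, the use of the Laplace mechanism) are assumptions made for illustration only.

```python
# Hypothetical sketch of fair sampling + personalized LDP perturbation.
# Not the paper's code; alpha, eps_threshold, sensitivity are assumed.
import numpy as np

rng = np.random.default_rng(0)

def sampling_scores(local_losses, participation_counts, alpha=0.5):
    """Score each client: a higher local loss raises the score (utility),
    more past participations lower it (fairness)."""
    losses = np.asarray(local_losses, dtype=float)
    counts = np.asarray(participation_counts, dtype=float)
    loss_term = losses / (losses.sum() + 1e-12)
    fairness_term = 1.0 / (1.0 + counts)
    fairness_term = fairness_term / fairness_term.sum()
    scores = alpha * loss_term + (1.0 - alpha) * fairness_term
    return scores / scores.sum()          # probability vector over clients

def sample_clients(scores, k):
    """Sample k distinct clients with probability proportional to score."""
    return rng.choice(len(scores), size=k, replace=False, p=scores)

def perturb_gradient(grad, eps_personal, eps_threshold, sensitivity=1.0):
    """Cap the personal budget at the global threshold, then add Laplace
    noise calibrated to the effective budget (epsilon-LDP per round)."""
    eps_used = min(eps_personal, eps_threshold)
    g = np.clip(grad, -sensitivity, sensitivity)  # bound per-coordinate sensitivity
    noise = rng.laplace(0.0, sensitivity / eps_used, size=g.shape)
    return g + noise, eps_used

# Toy round: 5 clients with different losses, histories, and budgets.
losses = [0.9, 0.4, 1.2, 0.3, 0.7]
history = [3, 1, 0, 5, 2]
probs = sampling_scores(losses, history)
for c in sample_clients(probs, k=2):
    grad = rng.normal(size=10)
    noisy_grad, eps_used = perturb_gradient(grad,
                                            eps_personal=1.0 + 0.5 * c,
                                            eps_threshold=2.0)
```

In the paper's full scheme the sampling score and the budget threshold are derived from the fair sampling mechanism and the privacy budget waste function described above; the sketch only shows the general shape of such a round.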
Pages: 2325-2332
Page count: 8