Differentially Private Federated Learning with Heterogeneous Group Privacy

Cited by: 1
Authors
Jiang, Mingna [1,2]
Wei, Linna [1,2]
Cai, Guoyue [1,2]
Wu, Xuangou [1,2]
Affiliations
[1] Anhui University of Technology, School of Computer Science and Technology, Maanshan, People's Republic of China
[2] Anhui Engineering Research Center of Intelligent Application & Security I, Maanshan, People's Republic of China
Source
2023 IEEE International Conferences on Internet of Things (iThings), IEEE Green Computing and Communications (GreenCom), IEEE Cyber, Physical and Social Computing (CPSCom), IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics | 2024
Keywords
Federated Learning; Data Privacy; Heterogeneous Differential Privacy; Data Heterogeneity; Attacks
DOI
10.1109/iThings-GreenCom-CPSCom-SmartData-Cybermatics60724.2023.00047
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Federated Learning (FL) enables collaborative training of Machine Learning (ML) models without sharing private data, effectively protecting data privacy. However, empirical studies have shown that shared model parameters can still leak private information. Differential Privacy (DP) has become a promising privacy-preserving solution that ensures client privacy by introducing noise into the model parameters. Nevertheless, a trade-off exists between the degree of privacy protection and model performance in DP-enhanced FL. Existing works assign the same level of privacy protection to all clients, ignoring clients' individual privacy requirements. To address this challenge, we propose FedHGP, a differentially private FL approach with heterogeneous group privacy. FedHGP divides clients into groups and applies Heterogeneous Differential Privacy (HDP) to each group individually. First, clients are assigned to groups with different levels of privacy protection based on the similarity of their data distributions. Then, each group adds noise at its own level and performs model aggregation separately. Finally, we conduct experiments on three benchmark datasets. The results show that our approach effectively improves model performance when clients' data and privacy needs are heterogeneous.
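The grouping-then-noising pipeline the abstract describes can be illustrated with a short sketch. The Python snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it clusters clients by the similarity of their label histograms (one plausible proxy for data-distribution similarity), calibrates Gaussian noise per group using the standard Gaussian-mechanism formula from Dwork and Roth, and averages updates within each group. All function names, the choice of KMeans, and the per-group epsilon values are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_clients(label_hists, n_groups):
    """Cluster clients on their normalized label histograms so that
    clients with similar data distributions share a privacy group.
    (Illustrative grouping rule; the paper's criterion may differ.)"""
    hists = np.array(label_hists, dtype=float)
    hists /= hists.sum(axis=1, keepdims=True)  # counts -> distributions
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(hists)

def gaussian_sigma(epsilon, delta, sensitivity):
    """Classic Gaussian-mechanism calibration (Dwork & Roth):
    sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

def dp_group_aggregate(updates, groups, epsilons, delta=1e-5, clip=1.0):
    """Clip each client update, add noise calibrated to its group's
    epsilon, and average within each group (per-group FedAvg)."""
    aggregated = {}
    for g in np.unique(groups):
        members = [u for u, gid in zip(updates, groups) if gid == g]
        sigma = gaussian_sigma(epsilons[g], delta, clip)
        noisy = []
        for u in members:
            u = u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))  # L2 clip
            noisy.append(u + np.random.normal(0.0, sigma, u.shape))
        aggregated[g] = np.mean(noisy, axis=0)
    return aggregated

# Toy usage: 6 clients, 3 classes, two privacy groups with
# hypothetical budgets (epsilon=1.0 strict, epsilon=8.0 relaxed).
rng = np.random.default_rng(0)
hists = rng.integers(1, 50, size=(6, 3))
updates = [rng.normal(size=10) for _ in range(6)]
groups = group_clients(hists, n_groups=2)
result = dp_group_aggregate(updates, groups, epsilons={0: 1.0, 1: 8.0})
```

Note the design consequence the sketch makes visible: a smaller epsilon yields a larger noise scale, so groups holding more sensitive data trade accuracy for stronger protection, and aggregating per group rather than globally keeps that trade-off from being imposed uniformly on every client.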
Pages: 143-150
Page count: 8