Exploring Homomorphic Encryption and Differential Privacy Techniques towards Secure Federated Learning Paradigm

Cited by: 37
Authors
Aziz, Rezak [1 ]
Banerjee, Soumya [1 ]
Bouzefrane, Samia [1 ]
Vinh, Thinh Le [2 ]
Affiliations
[1] Cnam, CEDRIC Lab, 292 Rue St Martin, F-75003 Paris, France
[2] Ho Chi Minh City Univ Technol & Educ, Fac Informat Technol, Ho Chi Minh City, Vietnam
Keywords
federated learning; differential privacy; homomorphic encryption; privacy; accuracy; challenges
DOI: 10.3390/fi15090310
Chinese Library Classification: TP [Automation Technology; Computer Technology]
Discipline code: 0812
Abstract
The trend of the next generation of the internet has already been scrutinized by top analytics enterprises. According to Gartner investigations, it is predicted that, by 2024, 75% of the global population will have their personal data covered under privacy regulations. This alarming statistic necessitates the orchestration of several security components to address the enormous challenges posed by federated and distributed learning environments. Federated learning (FL) is a promising technique that allows multiple parties to collaboratively train a model without sharing their data. However, even though FL is seen as a privacy-preserving distributed machine learning method, recent works have demonstrated that it is vulnerable to several privacy attacks. Homomorphic encryption (HE) and differential privacy (DP) are two promising techniques for addressing these privacy concerns: HE allows secure computation on encrypted data, while DP provides strong privacy guarantees by adding calibrated noise to the data. This paper first presents documented privacy attacks on federated learning and then provides an overview of HE and DP techniques for secure federated learning in next-generation internet applications. It discusses the strengths and weaknesses of these techniques in different settings as described in the literature, with a particular focus on the trade-off between privacy and convergence, as well as the computation overheads involved. The objective of this paper is to analyze the challenges associated with each technique and to identify potential opportunities and solutions for designing a more robust, privacy-preserving federated learning framework.
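To make the DP mechanism described in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper) of how noise can be added to client updates in a federated averaging round: each client's update is clipped to bound its sensitivity, and Gaussian noise is added to the aggregate. The constants `CLIP_NORM` and `NOISE_STD` are hypothetical values chosen for illustration; real deployments would calibrate the noise scale to a target (epsilon, delta) privacy budget.

```python
# Hedged sketch of DP-style aggregation in federated averaging.
# CLIP_NORM and NOISE_STD are illustrative assumptions, not values from the paper.
import random

CLIP_NORM = 1.0   # hypothetical per-client L2 sensitivity bound
NOISE_STD = 0.5   # hypothetical noise scale (governs the privacy/accuracy trade-off)

def clip(update, max_norm=CLIP_NORM):
    """Scale the update down so its L2 norm is at most max_norm."""
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def dp_aggregate(client_updates, noise_std=NOISE_STD):
    """Average clipped client updates, then add Gaussian noise per coordinate."""
    clipped = [clip(u) for u in client_updates]
    n = len(clipped)
    dim = len(clipped[0])
    avg = [sum(u[i] for u in clipped) / n for i in range(dim)]
    return [a + random.gauss(0.0, noise_std / n) for a in avg]

updates = [[0.3, -0.1], [2.0, 1.0], [-0.5, 0.4]]  # toy client gradients
noisy_avg = dp_aggregate(updates)
print(noisy_avg)  # a noisy (privatized) estimate of the average update
```

Clipping before aggregation is what makes the noise scale meaningful: without a sensitivity bound, one client's arbitrarily large update could dominate the average and no fixed amount of noise would hide its contribution.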
Pages: 25