A new approach for cross-silo federated learning and its privacy risks

Cited by: 3
Authors
Fontana, Michele [1 ]
Naretto, Francesca [2 ]
Monreale, Anna [1 ]
Affiliations
[1] Univ Pisa, Pisa, Italy
[2] Scuola Normale Super Pisa, Pisa, Italy
Source
2021 18TH INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY AND TRUST (PST) | 2021
Keywords
Federated Learning; Privacy risk assessment;
DOI
10.1109/PST52912.2021.9647753
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline code
0812
Abstract
Federated Learning has grown increasingly popular in recent years for its ability to train Machine Learning models in critical contexts, using private data without moving them. Most approaches in the literature focus on mobile environments, where mobile devices contain the data of single users and typically deal with image or text data. In this paper, we define HOLDA, a novel federated learning approach tailored for training machine learning models on data distributed across hierarchically organized federations of organizations. Our method focuses on the generalization capabilities of the neural network models, providing a new mechanism for selecting their best weights. In addition, it is tailored for tabular data. We empirically test our approach on two different tabular datasets, showing excellent results in terms of performance and generalization capabilities. We then also tackle the problem of assessing the privacy risk of the users represented in the training data. In particular, by attacking the HOLDA models with the Membership Inference Attack, we empirically show that the privacy of the users in the training data may be at high risk.
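The Membership Inference Attack used in the abstract's risk assessment can be illustrated with a minimal confidence-threshold sketch: a model is typically more confident on points it was trained on, so an attacker can guess membership by thresholding the top-class confidence. Everything below is a hedged illustration with synthetic confidence scores; the distributions and the threshold `tau` are assumptions for demonstration, not the attack configuration evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a trained model's top-class softmax confidences:
# models tend to be more confident on training ("member") points than on
# unseen ("non-member") points, which is the signal the attack exploits.
member_conf = rng.beta(8, 2, size=1000)      # skewed toward 1.0
nonmember_conf = rng.beta(4, 4, size=1000)   # centred around 0.5

def threshold_mia(confidences, tau):
    """Predict 'member' whenever the model's confidence exceeds tau."""
    return confidences > tau

tau = 0.7  # illustrative threshold, chosen by the attacker
tpr = threshold_mia(member_conf, tau).mean()      # true-positive rate
fpr = threshold_mia(nonmember_conf, tau).mean()   # false-positive rate
advantage = tpr - fpr  # membership advantage: 0 = no leakage
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  advantage={advantage:.2f}")
```

A positive membership advantage means the attacker distinguishes training from non-training points better than random guessing, which is the kind of privacy risk the paper measures against HOLDA models.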
Pages: 10
References
22 items in total
[1]   Privacy-Preserving Machine Learning: Threats and Solutions [J].
Al-Rubaie, Mohammad ;
Chang, J. Morris .
IEEE SECURITY & PRIVACY, 2019, 17 (02) :49-58
[2]  
Alistarh D., 2017, ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS
[3]  
Caldas S., 2018, arXiv:1812.07210
[4]   Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures [J].
Fredrikson, Matt ;
Jha, Somesh ;
Ristenpart, Thomas .
CCS'15: PROCEEDINGS OF THE 22ND ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2015, :1322-1333
[5]  
Fredrikson M, 2014, PROCEEDINGS OF THE 23RD USENIX SECURITY SYMPOSIUM, P17
[6]   Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations [J].
Ganju, Karan ;
Wang, Qi ;
Yang, Wei ;
Gunter, Carl A. ;
Borisov, Nikita .
PROCEEDINGS OF THE 2018 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'18), 2018, :619-633
[7]  
Hard Andrew, 2018, PROCEEDINGS OF THE 2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING
[8]   Secure, privacy-preserving and federated machine learning in medical imaging [J].
Kaissis, Georgios A. ;
Makowski, Marcus R. ;
Ruckert, Daniel ;
Braren, Rickmer F. .
NATURE MACHINE INTELLIGENCE, 2020, 2 (06) :305-311
[9]  
Karimireddy Sai Praneeth, 2019, arXiv:1910.06378
[10]  
Konecny J., 2016, PROCEEDINGS OF THE NIPS WORKSHOP ON PRIVATE MULTI-PARTY MACHINE LEARNING