Honest Fraction Differential Privacy

Cited by: 0
Authors
Taibi, Imane [1 ]
Ramon, Jan [1 ]
Affiliations
[1] Univ Lille, Inria, CNRS, Cent Lille,UMR 9189 CRIStAL, Lille, France
Source
PROCEEDINGS OF THE 2024 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2024 | 2024
Keywords
Differential Privacy; Federated Learning; Security;
DOI
10.1145/3658664.3659655
Chinese Library Classification (CLC): TP [automation technology, computer technology]
Discipline code: 0812
Abstract
Over the last decades, differential privacy (DP) has become a standard notion of privacy. It allows one to measure how much sensitive information an adversary could infer from a result (a statistical model, a prediction, etc.) they obtain. In privacy-preserving federated machine learning, the goal is to learn a statistical model from data held by multiple data owners without revealing their sensitive data. A common strategy is to use secure multi-party computation (SMPC) to avoid revealing intermediate results. However, DP assumes a very strong adversary who knows all information in the dataset except the targeted secret, whereas most SMPC methods assume a clearly weaker adversary; e.g., it is common to assume that the adversary has bounded computational power and can corrupt only a minority of the data owners (honest majority). Since a chain is no stronger than its weakest link, in such combinations DP provides overly strong protection at an unnecessarily high cost in terms of utility. We propose honest fraction differential privacy, which is similar to differential privacy but assumes that the adversary can only collude with data owners covering part of the data. This assumption is very close to the assumptions made by many SMPC strategies. We illustrate the idea on the specific task of unregularized linear regression without a bias term on sufficiently large datasets.
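The abstract does not reproduce the paper's mechanism, but the setting it names (unregularized linear regression without a bias term, protected by DP noise) can be sketched with standard output perturbation. The snippet below is an illustrative sketch only: the `sensitivity` bound is assumed, not derived, and a real analysis (honest-fraction or classical DP) must bound how much the least-squares solution can change when one data owner's records are replaced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n samples, d features, no bias term (as in the paper's setting).
n, d = 10_000, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=n)

# Ordinary least-squares fit: unregularized, no intercept.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Output perturbation with the Laplace mechanism (illustrative only).
# 'sensitivity' is an assumed placeholder bound on how much w_hat can
# move when one owner's data changes; it is NOT computed here.
epsilon = 1.0
sensitivity = 0.05
w_private = w_hat + rng.laplace(scale=sensitivity / epsilon, size=d)
```

Under a weaker honest-fraction adversary, the required noise scale would shrink, which is precisely the utility gain the abstract argues for.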
Pages: 247-251 (5 pages)