Unexpected Information Leakage of Differential Privacy Due to the Linear Property of Queries

Cited by: 5
Authors
Huang, Wen [1 ]
Zhou, Shijie [1 ]
Liao, Yongjian [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu 610054, Peoples R China
Keywords
Privacy; Differential privacy; Sensitivity; Correlation; Testing; National Institutes of Health; Switches; Laplace mechanism; membership inference attacks; differential privacy; linear property;
DOI
10.1109/TIFS.2021.3075843
CLC number
TP301 [Theory, Methods];
Discipline code
081202 ;
Abstract
Differential privacy is a widely accepted concept of privacy preservation, and the Laplace mechanism is a famous instance of differentially private mechanisms used to deal with numerical data. In this paper, we find that differential privacy does not take the linear property of queries into account, resulting in unexpected information leakage. Specifically, the linear property makes it possible to divide one query into two queries, such as q(D) = q(D1) + q(D2) if D = D1 ∪ D2 and D1 ∩ D2 = ∅. If attackers try to obtain an answer to q(D), they can not only issue the query q(D) directly but also issue q(D1) and compute q(D2) by themselves, as long as they know D2. Through different divisions of one query, attackers can obtain multiple different answers to the same query from differentially private mechanisms. However, if the divisions are delicately designed, the total consumed privacy budget differs between the attackers' perspective and the mechanisms' perspective. This difference leads to unexpected information leakage, because the privacy budget is the key parameter controlling the amount of information that a differentially private mechanism legally releases. To demonstrate this unexpected leakage, we present a membership inference attack against the Laplace mechanism. Specifically, under the constraints of differential privacy, we propose a method for obtaining multiple independent identically distributed samples of answers to queries that satisfy the linear property. The method relies on the linear property and some background knowledge of the attackers. When the background knowledge is sufficient, the method can obtain so many samples from differentially private mechanisms that the total consumed privacy budget becomes unreasonably large. Based on the obtained samples, a hypothesis testing method determines whether a target record is in the target dataset.
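The attack described in the abstract can be illustrated with a minimal sketch. The following toy code is not the paper's implementation; the dataset, the sum query, the per-query budget, and the sensitivity bound are all illustrative assumptions. It shows how, when the attacker knows every record except the target, each division D = D1 ∪ D2 with a known D2 turns one noisy Laplace answer to q(D1) into an independent noisy sample of q(D), and how averaging many such samples supports the membership test:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    # Standard Laplace mechanism for a numeric query: add Lap(sensitivity/epsilon) noise.
    return true_answer + rng.laplace(scale=sensitivity / epsilon)

# Hypothetical setup: the attacker knows every record except whether
# `target` is present, and may query the sum over arbitrary sub-datasets.
known = [1.0] * 200        # attacker's background knowledge
target = 7.0               # record whose membership is being tested
D = known + [target]       # the true dataset (target is present)

epsilon = 2.0              # per-query privacy budget (illustrative)
sensitivity = 10.0         # assumed bound on a single record's value

# Each division D = D1 ∪ D2 with a known D2 turns one noisy answer to
# q(D1) into an independent noisy sample of q(D) = q(D1) + q(D2).
samples = []
for i in range(len(known)):
    D2 = [known[i]]
    D1 = D[:i] + D[i + 1:]   # D with the i-th known record removed
    noisy_q1 = laplace_mechanism(sum(D1), sensitivity, epsilon)
    samples.append(noisy_q1 + sum(D2))

# Averaging n samples shrinks the noise standard deviation by 1/sqrt(n),
# so a simple test of whether the mean is closer to sum(known) or to
# sum(known) + target succeeds once enough divisions have been issued,
# even though each individual query was answered under budget epsilon.
mean = float(np.mean(samples))
in_dataset = abs(mean - (sum(known) + target)) < abs(mean - sum(known))
```

Each issued query is syntactically distinct, yet all of them leak about the same quantity q(D); this mismatch between per-query accounting and the information actually released is the unexpected leakage the paper formalizes.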
Pages: 3123-3137
Page count: 15