Frequency Estimation Mechanisms Under (ϵ,δ)-Utility-Optimized Local Differential Privacy

Cited by: 4
Authors
Zhang, Yue [1 ]
Zhu, Youwen [1 ]
Zhou, Yuqian [2 ]
Yuan, Jiabin [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Jiangsu, Peoples R China
[2] Nanjing Univ Aeronaut & Astronaut, Nanjing 211106, Jiangsu, Peoples R China
Keywords
Frequency estimation; Differential privacy; Privacy; Data models; Estimation error; Data analysis; Standards; Utility-optimized local differential privacy; data collection; frequency estimation; RECOMMENDATION
DOI
10.1109/TETC.2023.3238839
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Frequency estimation mechanisms are widely applied in domains such as machine learning and cloud computing, where it is desirable to provide statistical information. As a fundamental operation in these domains, frequency estimation works on personal data that contains sensitive information, which must be protected from others. Motivated by this, we preserve users' privacy under local differential privacy (LDP) by obfuscating personal data on the user side. In this paper, we propose frequency estimation mechanisms under utility-optimized local differential privacy (ULDP), which allow the data collector to obtain some non-sensitive values to improve data utility while preventing sensitive values from leaking sensitive information. We propose three frequency estimation mechanisms under (ε, δ)-ULDP (uRFM-GRR, uRFM-RAPPOR, and uRFM-OLH) to preserve users' sensitive information. The proposed mechanisms protect sensitive data with the same privacy guarantee, and each suits a different scenario. In addition, we theoretically compare the estimation errors of our mechanisms with those of existing LDP-based mechanisms and show that ours are lower. Finally, we conduct experiments on synthetic and real-world datasets to evaluate the performance of the three mechanisms. The experimental results demonstrate that our mechanisms outperform existing LDP-based solutions at the same privacy level, with uRFM-OLH frequently performing best.
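The user-side obfuscation the abstract refers to can be illustrated with Generalized Randomized Response (GRR), the classic ε-LDP frequency-estimation primitive that uRFM-GRR builds on. The sketch below is illustrative only; the function names and parameters are our own, not the paper's, and it shows plain GRR rather than the paper's utility-optimized (ε, δ)-ULDP variant: each user perturbs their value locally, and the collector debiases the aggregated counts.

```python
import math
import random
from collections import Counter

def grr_perturb(value, domain, epsilon, rng=random):
    """Report the true value with probability p = e^eps / (e^eps + k - 1);
    otherwise report a uniformly random *other* value from the domain."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, epsilon):
    """Debias the observed report frequencies into unbiased estimates
    of the true value frequencies."""
    n, k = len(reports), len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)
    counts = Counter(reports)
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

# Example: 10,000 users, 4 possible values, epsilon = 1
rng = random.Random(42)
domain = ["a", "b", "c", "d"]
true_values = ["a"] * 5000 + ["b"] * 3000 + ["c"] * 1500 + ["d"] * 500
reports = [grr_perturb(v, domain, 1.0, rng) for v in true_values]
estimates = grr_estimate(reports, domain, 1.0)  # roughly {a: 0.5, b: 0.3, c: 0.15, d: 0.05}
```

Since every report uses the same p and q, the debiased estimates are unbiased and sum exactly to 1; the paper's ULDP mechanisms improve on this baseline by releasing non-sensitive values with less (or no) perturbation.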
Pages: 316-327
Page count: 12