Distributed Differential Privacy via Shuffling Versus Aggregation: A Curious Study

Cited by: 2
Authors
Wei, Yu [1 ,2 ]
Jia, Jingyu [1 ,2 ]
Wu, Yuduo [1 ,2 ]
Hu, Changhui [3 ,4 ]
Dong, Changyu [5 ]
Liu, Zheli [1 ,2 ]
Chen, Xiaofeng [6 ]
Peng, Yun [5 ]
Wang, Shaowei [5 ]
Affiliations
[1] Nankai Univ, Coll Cyber Sci, Tianjin 300350, Peoples R China
[2] Nankai Univ, Coll Comp Sci, Minist Educ, Key Lab Data & Intelligent Syst Secur, Tianjin 300350, Peoples R China
[3] Hainan Univ, Sch Cyberspace Secur, Haikou 570228, Peoples R China
[4] Hainan Univ, Sch Cryptol, Haikou 570228, Peoples R China
[5] Guangzhou Univ, Inst Artificial Intelligence, Guangzhou 511370, Peoples R China
[6] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China; Engineering and Physical Sciences Research Council (UK);
Keywords
Differential privacy; shuffle model; aggregation model; NOISE;
DOI
10.1109/TIFS.2024.3351474
CLC number
TP301 [Theory, Methods];
Discipline code
081202 ;
Abstract
How to achieve distributed differential privacy (DP) without a trusted central party is of great interest in both theory and practice. Recently, the shuffle model has attracted much attention. Unlike the local DP model, in which users send randomized data directly to the data collector/analyzer, the shuffle model introduces an intermediate untrusted shuffler that randomly permutes the users' locally randomized data before it reaches the analyzer. The most appealing aspect is that, although shuffling adds no extra noise to the data, it improves privacy. As a consequence of this privacy amplification effect, users need to add less noise than in the local DP model to achieve the same level of differential privacy; protocols in the shuffle model can therefore provide better accuracy than those in the local DP model. Interestingly, the architecture of the shuffle model resembles that of private aggregation, which has been studied for more than a decade: there, locally randomized user data are aggregated by an intermediate untrusted aggregator. Our questions are thus whether aggregation also exhibits some sort of privacy amplification effect and, if so, how this "aggregation model" compares with the shuffle model. We conducted the first comparative study of the two, covering privacy amplification, functionality, protocol accuracy, and practicality. The results so far suggest that the new shuffle model has no obvious advantage over the old aggregation model; on the contrary, protocols in the aggregation model outperform those in the shuffle model, sometimes significantly, in many respects.
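As a toy illustration of the shuffle-model pipeline the abstract describes (a minimal sketch, not taken from the paper; function names, the binary-data setting, and the randomized-response local randomizer are illustrative choices), the three stages for mean estimation over bits might look like:

```python
import math
import random

def randomized_response(bit, epsilon):
    """Local randomizer: keep the true bit with probability e^eps / (e^eps + 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p else 1 - bit

def shuffle_round(bits, epsilon):
    """One round of the shuffle model: each user randomizes locally, then the
    untrusted shuffler applies a uniform random permutation, breaking the link
    between user and report before the analyzer sees anything."""
    reports = [randomized_response(b, epsilon) for b in bits]
    random.shuffle(reports)  # the shuffler's only job
    return reports

def debiased_mean(reports, epsilon):
    """Analyzer side: unbiased estimate of the true mean from noisy reports,
    inverting the randomized-response bias."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)
```

The amplification result the abstract refers to says, informally, that after the permutation the analyzer's view satisfies a much smaller central epsilon than the local epsilon each user applied, so users can run `randomized_response` with less noise for the same end-to-end guarantee.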
Pages: 2501-2516
Page count: 16