Privacy and Fairness Analysis in the Post-Processed Differential Privacy Framework

Cited by: 0
Authors
Zhao, Ying [1 ]
Zhang, Kai [1 ]
Gao, Longxiang [2 ,3 ]
Chen, Jinjun [1 ]
Affiliations
[1] Swinburne Univ Technol, Dept Comp Technol, Melbourne, Vic 3122, Australia
[2] Qilu Univ Technol, Shandong Comp Sci Ctr, Key Lab Comp Power Network & Informat Secur, Minist Educ, Shandong Acad Sci, Jinan 250316, Peoples R China
[3] Shandong Fundamental Res Ctr Comp Sci, Shandong Prov Key Lab Comp Power Internet & Serv C, Jinan 250000, Peoples R China
Keywords
Privacy; Accuracy; Differential privacy; Noise; Resource management; Vectors; Three-dimensional displays; Standards; Sensitivity; Optimization methods; consistency; non-negativity; post-processing; fairness; census data privacy; COUNTS;
DOI
10.1109/TIFS.2025.3528222
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
The post-processed Differential Privacy (DP) framework has been routinely adopted to preserve privacy while maintaining important invariant characteristics of datasets in data-release applications such as census data. Typical invariant characteristics include non-negative counts and total population. Subspace DP has been proposed to preserve total population while guaranteeing DP for sub-populations. Non-negativity post-processing has been identified as inherently incurring fairness issues. In this work, we study privacy and unfairness (i.e., accuracy disparity) concerns in the post-processed DP framework. On one hand, we show that the post-processed DP framework with both non-negativity and an accurate total population as constraints inadvertently violates the privacy guarantee it is intended to provide. Instead, we propose the post-processed subspace DP framework to accurately define privacy guarantees against adversaries. On the other hand, through empirical analysis we identify that the level of unfairness depends on the privacy budget, the count sizes, and their imbalance level. Particularly concerning is the severe unfairness that arises under strict privacy budgets. We further trace this unfairness back to the uniform privacy budget setting applied across different population subgroups. To address this, we propose a varying privacy budget setting method and develop optimization approaches using ternary search and golden ratio search to identify optimal privacy budget ranges that minimize unfairness while maintaining privacy guarantees. Our extensive theoretical and empirical analysis demonstrates the effectiveness of our approaches in addressing severe unfairness issues across different privacy settings and several canonical privacy mechanisms. Using the Australian Census data, the Adult dataset, and a dataset of delinquent children by county and household head education level, we validate both our privacy analysis framework and fairness optimization methods, showing significant reductions in accuracy disparities while maintaining strong privacy guarantees.
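Illustrative sketch (not from the paper): the abstract refers to non-negativity post-processing of DP-noised counts and to golden-ratio search over privacy budgets to reduce accuracy disparity. The minimal Python sketch below illustrates those two ideas only, assuming a Laplace mechanism with sensitivity 1, a simple clip-and-rescale projection as the post-processing step, and a gap-in-relative-error disparity metric. The function names (post_process, unfairness, golden_section_search) and the example subgroup counts are illustrative assumptions, not the authors' implementation.

import numpy as np

def laplace_counts(counts, epsilon, rng):
    # Release counts under epsilon-DP via the Laplace mechanism (sensitivity 1 assumed).
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

def post_process(noisy, total):
    # Illustrative post-processing: clip to non-negative values, then rescale so the
    # released counts sum to the invariant total (not the paper's exact projection).
    x = np.clip(noisy, 0.0, None)
    s = x.sum()
    return x * (total / s) if s > 0 else np.full_like(x, total / len(x))

def unfairness(counts, epsilon, trials=200, seed=0):
    # Stand-in accuracy-disparity metric: gap between the mean relative error of the
    # smallest and the largest subgroup count after post-processing.
    rng = np.random.default_rng(seed)
    small, large = np.argmin(counts), np.argmax(counts)
    err_small = err_large = 0.0
    for _ in range(trials):
        released = post_process(laplace_counts(counts, epsilon, rng), counts.sum())
        err_small += abs(released[small] - counts[small]) / max(counts[small], 1.0)
        err_large += abs(released[large] - counts[large]) / max(counts[large], 1.0)
    return (err_small - err_large) / trials

def golden_section_search(f, lo, hi, tol=1e-3):
    # Golden-ratio search for the minimizer of a function assumed unimodal on [lo, hi].
    inv_phi = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

counts = np.array([12.0, 85.0, 430.0, 2600.0])  # hypothetical imbalanced subgroup counts
eps_star = golden_section_search(lambda e: unfairness(counts, e), 0.05, 2.0)
print(f"budget with smallest disparity in [0.05, 2.0]: {eps_star:.3f}")

Running the sketch searches the interval [0.05, 2.0] for the budget with the smallest simulated disparity; the paper's actual optimization operates over varying per-subgroup budget settings rather than this single scalar epsilon.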
Pages: 2412 - 2423
Number of pages: 12