When Differential Privacy Implies Syntactic Privacy

Cited by: 3
Authors
Ekenstedt, Emelie [1 ]
Ong, Lawrence [1 ]
Liu, Yucheng [1 ]
Johnson, Sarah [1 ]
Yeoh, Phee Lep [2 ]
Kliewer, Joerg [3 ]
Affiliations
[1] Univ Newcastle, Sch Engn, Callaghan, NSW 2308, Australia
[2] Univ Sydney, Sch Elect & Informat Engn, Sydney, NSW 2006, Australia
[3] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
Funding
Australian Research Council; US National Science Foundation
Keywords
Privacy; differential privacy; t-closeness; syntactic privacy; k-anonymity; noise
DOI
10.1109/TIFS.2022.3177953
CLC number
TP301 [Theory, Methods]
Subject classification code
081202
Abstract
Two main privacy models for sanitising datasets are differential privacy (DP) and syntactic privacy. The former limits the impact that any individual's value can have on the output computed from the dataset, while the latter restructures the dataset before publication so that any record can be linked to multiple sensitive data values. Although both models provide mechanisms to sanitise data, they are usually applied independently of each other, and little is known about how they relate. Knowing how privacy models are related can deepen our understanding of privacy and can inform how a single privacy mechanism can satisfy multiple privacy models. In this paper, we introduce a framework that determines whether the privacy mechanisms of one privacy model can also guarantee privacy under another privacy model. We apply our framework to study the relationship between DP and a form of syntactic privacy called t-closeness. We demonstrate, for the first time, how DP and t-closeness can be interpreted in terms of each other by introducing generalisations and extensions of both models that explain the transition from one model to the other. Finally, we show that applying one mechanism to guarantee multiple privacy models increases data utility compared to applying a separate mechanism for each privacy model.
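As an illustrative sketch only (not taken from the paper itself), the kind of DP mechanism the abstract refers to can be realised by adding Laplace noise calibrated to a query's sensitivity; the function name and parameters below are assumptions chosen for illustration:

```python
import math
import random

def laplace_mechanism(true_value, epsilon, sensitivity=1.0):
    """Return true_value perturbed with Laplace(sensitivity/epsilon) noise.

    For a query whose output changes by at most `sensitivity` when any
    single record changes, this release satisfies epsilon-differential
    privacy: one individual's value has a bounded effect on the output.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution with scale b:
    # noise = -b * sign(u) * ln(1 - 2|u|)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: privatise a count query (sensitivity 1) over a dataset.
noisy_count = laplace_mechanism(100, epsilon=1.0)
```

Smaller `epsilon` gives a larger noise scale and hence stronger privacy at the cost of utility, which is the trade-off the paper's combined-mechanism result addresses.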
Pages: 2110-2124 (15 pages)
References: 37 in total