Quantifying Membership Privacy via Information Leakage

Cited by: 19
Authors
Saeidian, Sara [1 ]
Cervia, Giulia [2 ,3 ]
Oechtering, Tobias J. [1 ]
Skoglund, Mikael [1 ]
Affiliations
[1] KTH Royal Inst Technol, Div Informat Sci & Engn, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
[2] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
[3] Univ Lille, Ctr Digital Syst, IMT Lille Douai, Inst Mines Telecom, F-59000 Lille, France
Keywords
Privacy; Differential privacy; Measurement; Training; Machine learning; Data models; Upper bound; Privacy-preserving machine learning; membership inference; maximal leakage; log-concave probability density;
DOI
10.1109/TIFS.2021.3073804
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Machine learning models are known to memorize the unique properties of individual data points in a training set. This memorization capability can be exploited by several types of attacks to infer information about the training data, most notably, membership inference attacks. In this paper, we propose an approach based on information leakage for guaranteeing membership privacy. Specifically, we propose to use a conditional form of the notion of maximal leakage to quantify the information leaking about individual data entries in a dataset, i.e., the entrywise information leakage. We apply our privacy analysis to the Private Aggregation of Teacher Ensembles (PATE) framework for privacy-preserving classification of sensitive data and prove that the entrywise information leakage of its aggregation mechanism is Schur-concave when the injected noise has a log-concave probability density. The Schur-concavity of this leakage implies that increased consensus among teachers in labeling a query reduces its associated privacy cost. Finally, we derive upper bounds on the entrywise information leakage when the aggregation mechanism uses Laplace distributed noise.
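The aggregation mechanism analyzed in the abstract is PATE's noisy-max vote: each teacher model labels a query, Laplace noise is added to the per-class vote counts, and the argmax of the noisy counts is released. A minimal sketch of this step (function name, class count, and noise parameterization are illustrative; the scale `1/epsilon` follows the standard Laplace mechanism, not necessarily the exact calibration used in the paper):

```python
import numpy as np

def pate_aggregate(teacher_votes, num_classes, epsilon, rng=None):
    """Noisy-max aggregation in the style of PATE.

    teacher_votes: array of class labels, one vote per teacher.
    Adds Laplace(1/epsilon) noise to each class's vote count and
    returns the label with the largest noisy count.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(teacher_votes, minlength=num_classes)
    noisy_counts = counts + rng.laplace(scale=1.0 / epsilon, size=num_classes)
    return int(np.argmax(noisy_counts))
```

The Schur-concavity result quoted above matches the intuition visible here: when teachers agree strongly, the vote-count vector is highly concentrated, the noise rarely flips the winner, and the released label reveals little about any single training entry.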
Pages: 3096-3108
Page count: 13
Related Papers
50 records in total
  • [31] Efficient Federated Learning With Enhanced Privacy via Lottery Ticket Pruning in Edge Computing
    Shi, Yifan
    Wei, Kang
    Shen, Li
    Li, Jun
    Wang, Xueqian
    Yuan, Bo
    Guo, Song
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (10) : 9946 - 9958
  • [32] Minimum Gaussian Noise Variance of Federated Learning in the Presence of Mutual Information Based Differential Privacy
    He, Hua
    He, Zheng
    [J]. IEEE ACCESS, 2023, 11 : 111212 - 111225
  • [33] Information Leakage Measures for Imperfect Statistical Information: Application to Non-Bayesian Framework
    Sakib, Shahnewaz Karim
    Amariucai, George T.
    Guan, Yong
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 1065 - 1080
  • [34] Privacy Leakage in GAN Enabled Load Profile Synthesis
    Huang, Jiaqi
    Wu, Chenye
    [J]. 2022 IEEE SUSTAINABLE POWER AND ENERGY CONFERENCE (ISPEC), 2022,
  • [35] Information Measures in Statistical Privacy and Data Processing Applications
    Lin, Bing-Rong
    Kifer, Daniel
    [J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2015, 9 (04) : 1 - 29
  • [36] On the Privacy Leakage of Coded Caching
    Wang, Yu
    Abouzeid, Alhussein A.
    [J]. ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [37] Information Leakage in Embedding Models
    Song, Congzheng
    Raghunathan, Ananth
    [J]. CCS '20: PROCEEDINGS OF THE 2020 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2020, : 377 - 390
  • [38] Quantifying privacy in multiagent planning
    van der Krogt, Roman
    [J]. MULTIAGENT AND GRID SYSTEMS, 2009, 5 (04) : 451 - 469
  • [39] On the Leakage of Personally Identifiable Information Via Online Social Networks
    Krishnamurthy, Balachander
    Wills, Craig E.
    [J]. 2ND ACM SIGCOMM WORKSHOP ON ONLINE SOCIAL NETWORKS (WOSN 09), 2009, : 7 - 12
  • [40] One Parameter Defense-Defending Against Data Inference Attacks via Differential Privacy
    Ye, Dayong
    Shen, Sheng
    Zhu, Tianqing
    Liu, Bo
    Zhou, Wanlei
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 1466 - 1480