GANobfuscator: Mitigating Information Leakage Under GAN via Differential Privacy

Cited by: 126
Authors
Xu, Chugui [1 ]
Ren, Ju [1 ]
Zhang, Deyu [1 ]
Zhang, Yaoxue [1 ]
Qin, Zhan [2 ]
Ren, Kui [2 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Hunan, Peoples R China
[2] Zhejiang Univ, Inst Cyberspace Res, Hangzhou 310058, Zhejiang, Peoples R China
Funding
US National Science Foundation;
Keywords
Information leakage; generative adversarial network; deep learning; differential privacy; NOISE;
DOI
10.1109/TIFS.2019.2897874
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
By learning generative models of semantically rich data distributions from samples, the generative adversarial network (GAN) has recently attracted intensive research interest due to its excellent empirical performance as a generative model. A GAN estimates the underlying distribution of a dataset and randomly generates realistic samples according to the estimated distribution. However, GANs can easily memorize training samples because of the high model complexity of deep networks. When GANs are applied to private or sensitive data, the learned distribution may concentrate around individual training samples and thereby divulge critical information, so new techniques are needed to mitigate information leakage under GANs. To address this issue, we propose GANobfuscator, a differentially private GAN that achieves differential privacy by adding carefully designed noise to gradients during the learning procedure. With GANobfuscator, analysts can generate an unlimited amount of synthetic data for arbitrary analysis tasks without disclosing the privacy of the training data. Moreover, we theoretically prove that GANobfuscator provides a strict differential-privacy guarantee. In addition, we develop a gradient-pruning strategy for GANobfuscator to improve the scalability and stability of training. Through extensive experimental evaluation on benchmark datasets, we demonstrate that GANobfuscator produces high-quality synthetic data and retains desirable utility under practical privacy budgets.
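The abstract describes the mechanism only at a high level: per-example gradients are clipped (the "gradient-pruning" strategy) and perturbed with calibrated Gaussian noise before each discriminator update, in the style of differentially private SGD. The following PyTorch sketch is an illustrative rendering of that general mechanism under stated assumptions, not the authors' implementation; the names dp_discriminator_step, clip_bound, and noise_multiplier, the Wasserstein-style critic loss, and the default hyperparameter values are all placeholders introduced for the example.

import torch

def dp_discriminator_step(D, G, real_batch, opt_D, z_dim=100,
                          clip_bound=1.0, noise_multiplier=1.1):
    """One differentially private discriminator update (illustrative sketch)."""
    device = real_batch.device
    params = [p for p in D.parameters() if p.requires_grad]
    grad_sums = [torch.zeros_like(p) for p in params]  # summed clipped per-example gradients

    for x in real_batch:  # process the batch as microbatches of size 1
        z = torch.randn(1, z_dim, device=device)
        fake = G(z).detach()
        # Wasserstein-style critic loss on one real/fake pair (illustrative choice).
        loss = D(fake).mean() - D(x.unsqueeze(0)).mean()
        grads = torch.autograd.grad(loss, params)
        # "Gradient pruning": scale the per-example gradient so its L2 norm <= clip_bound.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_bound / (total_norm + 1e-12)).clamp(max=1.0)
        for acc, g in zip(grad_sums, grads):
            acc.add_(g * scale)

    n = real_batch.size(0)
    opt_D.zero_grad()
    for p, acc in zip(params, grad_sums):
        # Gaussian noise calibrated to the clipping bound yields the privacy guarantee.
        noise = torch.randn_like(acc) * noise_multiplier * clip_bound
        p.grad = (acc + noise) / n
    opt_D.step()

Because the generator never touches real data directly, DP-GAN designs of this kind typically perturb only the discriminator's gradients; the generator's update is post-processing of a differentially private mechanism and needs no additional noise. The clipping bound and noise multiplier would be tuned against a target (epsilon, delta) privacy budget.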
Pages: 2358-2371
Number of pages: 14
Related Papers
50 records in total
  • [11] Unifying Privacy Measures via Maximal (α, β)-Leakage (MαbeL)
    Gilani, Atefeh
    Kurri, Gowtham R.
    Kosut, Oliver
    Sankar, Lalitha
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2024, 70 (06) : 4368 - 4395
  • [12] Quantum Differential Privacy: An Information Theory Perspective
    Hirche, Christoph
    Rouze, Cambyse
    Franca, Daniel Stilck
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2023, 69 (09) : 5771 - 5787
  • [13] Evaluating Differential Privacy on Correlated Datasets Using Pointwise Maximal Leakage
    Saeidian, Sara
    Oechtering, Tobias J.
    Skoglund, Mikael
    PRIVACY TECHNOLOGIES AND POLICY, APF 2024, 2024, 14831 : 73 - 86
  • [14] Anti-Leakage Method of Sensitive Information of Network Documents Based on Differential Privacy Model
    Su, Shuhui
    Luo, Yonghan
    Li, Tao
    Chen, Qi
    Liang, Juntao
    SECURITY AND PRIVACY, 2025, 8 (01):
  • [15] Differential Privacy for Information Retrieval
    Yang, Grace Hui
    Zhang, Sicong
    ICTIR'17: PROCEEDINGS OF THE 2017 ACM SIGIR INTERNATIONAL CONFERENCE ON THEORY OF INFORMATION RETRIEVAL, 2017, : 325 - 326
  • [16] GAN-Based Differential Private Image Privacy Protection Framework for the Internet of Multimedia Things
    Yu, Jinao
    Xue, Hanyu
    Liu, Bo
    Wang, Yu
    Zhu, Shibing
    Ding, Ming
    SENSORS, 2021, 21 (01) : 1 - 21
  • [17] Differential Privacy via Distributionally Robust Optimization
    Selvi, Aras
    Liu, Huikang
    Wiesemann, Wolfram
    OPERATIONS RESEARCH, 2025,
  • [19] An information theoretic approach to post randomization methods under differential privacy
    Ayed, Fadhel
    Battiston, Marco
    Camerlenghi, Federico
    STATISTICS AND COMPUTING, 2020, 30 (05) : 1347 - 1361
  • [20] Robust Privacy-Utility Tradeoffs Under Differential Privacy and Hamming Distortion
    Kalantari, Kousha
    Sankar, Lalitha
    Sarwate, Anand D.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2018, 13 (11) : 2816 - 2830