Benchmarking Evaluation Protocols for Classifiers Trained on Differentially Private Synthetic Data

Cited: 0
Authors
Movahedi, Parisa [1 ]
Nieminen, Valtteri [1 ,2 ]
Perez, Ileana Montoya [1 ]
Daafane, Hiba [1 ]
Sukhwal, Dishant [1 ]
Pahikkala, Tapio [1 ]
Airola, Antti [1 ]
Affiliations
[1] Turku Univ, Dept Comp, Turku 20014, Finland
[2] Helsinki Univ Hosp HUS, Helsinki 00290, Finland
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Protocols; Synthetic data; Data models; Privacy; Analytical models; Machine learning; Bioinformatics; Classification algorithms; Differential privacy; Generative AI; Biomedical data; Classification; Model evaluation
DOI
10.1109/ACCESS.2024.3446913
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
Differentially private (DP) synthetic data has emerged as a potential solution for sharing sensitive individual-level biomedical data. DP generative models offer a promising approach for generating realistic synthetic data that aims to maintain the original data's central statistical properties while ensuring privacy by limiting the risk of disclosing sensitive information about individuals. However, how to assess the expected real-world prediction performance of machine learning models trained on synthetic data remains an open question. In this study, we experimentally evaluate two different model evaluation protocols for classifiers trained on synthetic data. The first protocol employs solely synthetic data for downstream model evaluation, whereas the second protocol assumes limited DP access to a private test set consisting of real data managed by a data curator. We also propose a metric for assessing how well the evaluation results of these protocols match the real-world prediction performance of the models. The metric captures both a systematic error component, indicating how optimistic or pessimistic the protocol is on average, and a random error component, indicating the variability of the protocol's error. The results of our study suggest that employing the second protocol is advantageous, particularly in biomedical health studies, where the precision of the research is of utmost importance. Our comprehensive empirical study offers new insights into the practical feasibility and usefulness of different evaluation protocols for classifiers trained on DP-synthetic data.
Pages: 118637 - 118648
Page count: 12
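For illustration only (this is not the paper's implementation), the minimal Python sketch below shows one way the two evaluation protocols described in the abstract could be compared empirically: a classifier is trained on synthetic data, its "real-world" accuracy is measured on a large held-out real set, and each protocol's estimate is summarized over repetitions by a systematic error component (mean signed deviation from real-world performance) and a random error component (standard deviation of that deviation). The synthetic-data generator here is a naive noisy resampler, not a DP generative model, and DP test-set access is approximated by Laplace noise on a correct-prediction count; all helper names are hypothetical.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_synthetic(X, y, noise=0.5):
    # Placeholder "generator": resample the private records and add Gaussian
    # noise. A real study would use a DP generative model instead.
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx] + rng.normal(0, noise, X[idx].shape), y[idx]

def dp_accuracy(y_true, y_pred, epsilon=1.0):
    # Accuracy from a Laplace-noised correct-prediction count (sensitivity 1),
    # a simplified stand-in for limited DP access to the curator's test set.
    correct = np.sum(y_true == y_pred) + rng.laplace(0, 1.0 / epsilon)
    return float(np.clip(correct / len(y_true), 0.0, 1.0))

errors = {"synthetic-only": [], "dp-real-test": []}
for rep in range(50):
    X, y = make_classification(n_samples=1400, n_features=10, random_state=rep)
    X_tr, y_tr = X[:300], y[:300]          # private training data (curator)
    X_te, y_te = X[300:400], y[300:400]    # private real test set (curator)
    X_world, y_world = X[400:], y[400:]    # stand-in for real-world deployment data

    X_syn, y_syn = make_synthetic(X_tr, y_tr)
    clf = LogisticRegression(max_iter=1000).fit(X_syn[:200], y_syn[:200])
    real_world = accuracy_score(y_world, clf.predict(X_world))  # "true" performance

    # Protocol 1: evaluate on the held-out synthetic records only.
    est1 = accuracy_score(y_syn[200:], clf.predict(X_syn[200:]))
    # Protocol 2: limited DP access to the curator's real test set.
    est2 = dp_accuracy(y_te, clf.predict(X_te), epsilon=1.0)

    errors["synthetic-only"].append(est1 - real_world)
    errors["dp-real-test"].append(est2 - real_world)

for name, errs in errors.items():
    print(f"{name:15s} systematic error {np.mean(errs):+.3f}  random error {np.std(errs):.3f}")

Under these assumptions, the synthetic-only protocol's error is driven mainly by how faithfully the generator preserves the data distribution, whereas the DP real-test protocol's error is dominated by the privacy noise, which shrinks as the test set grows or epsilon increases; this loosely mirrors the trade-off between the two protocols discussed in the abstract.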