Enhancing Deep Learning Model Privacy Against Membership Inference Attacks Using Privacy-Preserving Oversampling

Cited by: 0
Authors
Subhasish Ghosh [1 ]
Amit Kr Mandal [1 ]
Agostino Cortesi [2 ]
Affiliations
[1] SRM University AP, Department of Computer Science and Engineering
[2] Ca’ Foscari University, Department of Computer Science
Keywords
Oversampling method; Deep neural networks; Membership inference attack; Differential privacy
DOI
10.1007/s42979-025-03845-1
Abstract
Overfitting of deep learning models trained on moderately imbalanced datasets is a main factor in the success of membership inference attacks. While many oversampling methods have been designed to reduce data imbalance, few also defend deep neural network (DNN) models against membership inference attacks. We introduce the privacy-preserving synthetic minority oversampling technique (PP-SMOTE), which applies privacy-preservation mechanisms during data preprocessing rather than during model training. PP-SMOTE adds Laplace noise while generating synthetic data points for minority classes, with the noise calibrated to the L1 sensitivity of the dataset. A DNN model trained on data oversampled with PP-SMOTE is less vulnerable to membership inference attacks than models trained on datasets oversampled by GAN or SVMSMOTE. PP-SMOTE also retains more model accuracy and yields lower membership inference attack accuracy than differential privacy mechanisms such as DP-SGD and DP-GAN. Experimental results show that PP-SMOTE mitigates membership inference attack accuracy to below approximately 0.60 while preserving high model accuracy, with AUC scores above approximately 0.90. Additionally, the broader confidence-score distribution produced by PP-SMOTE enhances both model accuracy and mitigation of membership inference attacks (MIA); this is confirmed by the loss-epoch curve, which shows stable convergence and minimal overfitting during training. The higher variance in confidence scores also makes it harder for attackers to distinguish training data, further reducing the risk of MIA.
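The mechanism outlined in the abstract (SMOTE-style interpolation between minority-class neighbours, plus Laplace noise calibrated to the dataset's L1 sensitivity) can be sketched as below. This is a minimal illustration, not the authors' implementation: the function name `pp_smote`, the use of the summed per-feature range as an L1-sensitivity estimate, and the `epsilon`/`k` defaults are all assumptions.

```python
import numpy as np

def pp_smote(X_min, n_synthetic, epsilon=1.0, k=5, seed=None):
    """Sketch of a privacy-preserving SMOTE variant (assumed design).

    X_min       : (n, d) array of minority-class samples
    n_synthetic : number of synthetic points to generate
    epsilon     : privacy budget; smaller epsilon -> more Laplace noise
    k           : number of nearest neighbours used for interpolation
    """
    rng = np.random.default_rng(seed)
    # Assumed L1-sensitivity estimate: sum of per-feature ranges.
    sensitivity = np.abs(X_min.max(axis=0) - X_min.min(axis=0)).sum()
    scale = sensitivity / epsilon  # Laplace scale b = sensitivity / epsilon
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))
        # k nearest neighbours of X_min[i] by Euclidean distance
        # (index 0 is the point itself, so skip it).
        dist = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]
        j = rng.choice(neighbours)
        # Standard SMOTE interpolation between the point and a neighbour...
        lam = rng.random()
        point = X_min[i] + lam * (X_min[j] - X_min[i])
        # ...followed by per-feature Laplace perturbation.
        point = point + rng.laplace(0.0, scale, size=X_min.shape[1])
        synthetic.append(point)
    return np.array(synthetic)
```

The key design point the abstract emphasises is that the noise is injected during preprocessing, so the downstream DNN can be trained with an ordinary optimiser rather than a private one such as DP-SGD.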