Enhancing Deep Learning Model Privacy Against Membership Inference Attacks Using Privacy-Preserving Oversampling

Cited by: 0
Authors
Subhasish Ghosh [1 ]
Amit Kr Mandal [1 ]
Agostino Cortesi [2 ]
Affiliations
[1] Department of Computer Science and Engineering, SRM University AP, Amaravati, Andhra Pradesh
[2] Department of Computer Science, Ca’ Foscari University, Via Torino 155, Venice
Keywords
Deep neural networks; Differential privacy; Membership inference attack; Oversampling method
DOI
10.1007/s42979-025-03845-1
Abstract
The overfitting of deep learning models trained on moderately imbalanced datasets is a key factor in increasing the success rate of membership inference attacks. While many oversampling methods have been designed to reduce data imbalance, only a few defend deep neural network models against membership inference attacks. We introduce the privacy-preserving synthetic minority oversampling technique (PP-SMOTE), which applies privacy-preservation mechanisms during data preprocessing rather than during model training. PP-SMOTE adds Laplace noise when generating synthetic data points for minority classes, calibrated to the L1 sensitivity of the dataset. A DNN model trained on data oversampled with PP-SMOTE is less vulnerable to membership inference attacks than models trained on datasets oversampled with GAN or SVMSMOTE. PP-SMOTE also retains higher model accuracy and yields lower membership inference attack accuracy than differential privacy mechanisms such as DP-SGD and DP-GAN. Experimental results show that PP-SMOTE reduces membership inference attack accuracy to roughly below 0.60 while preserving high model accuracy, with AUC scores roughly above 0.90. Additionally, the broader confidence-score distribution achieved by PP-SMOTE significantly improves both model accuracy and the mitigation of membership inference attacks (MIA). This is confirmed by the loss–epoch curve, which shows stable convergence and minimal overfitting during training. The higher variance in confidence scores also makes it harder for attackers to distinguish training data, thereby reducing the risk of MIA. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2025.
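To make the mechanism described in the abstract concrete, below is a minimal Python sketch of a PP-SMOTE-style oversampler. It assumes standard SMOTE interpolation between a minority sample and one of its k nearest minority-class neighbours, followed by per-feature Laplace noise with scale Δ1/ε. The function name pp_smote, the parameters epsilon and k, and the approximation of the L1 sensitivity by the per-feature value range are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def pp_smote(X_min, n_synthetic, epsilon=1.0, k=5, seed=None):
    """Illustrative PP-SMOTE-style oversampling (not the authors' code):
    SMOTE interpolation among minority samples plus Laplace noise whose
    scale is derived from an estimate of the per-feature L1 sensitivity."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)

    # Standard SMOTE step: k nearest neighbours within the minority class.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)

    # L1 sensitivity approximated here by the per-feature value range
    # (an assumption; the paper's exact sensitivity computation may differ).
    sensitivity = X_min.max(axis=0) - X_min.min(axis=0)
    scale = np.maximum(sensitivity, 1e-12) / epsilon  # Laplace scale b = Δ1 / ε

    synthetic = np.empty((n_synthetic, X_min.shape[1]))
    for i in range(n_synthetic):
        base = rng.integers(len(X_min))
        neighbor = idx[base, rng.integers(1, k + 1)]    # column 0 is the point itself
        gap = rng.random()                              # interpolation factor in [0, 1)
        point = X_min[base] + gap * (X_min[neighbor] - X_min[base])
        synthetic[i] = point + rng.laplace(0.0, scale)  # per-feature Laplace noise
    return synthetic

# Toy usage on a synthetic minority class (shapes are assumptions).
X_minority = np.random.default_rng(0).normal(size=(40, 8))
X_synth = pp_smote(X_minority, n_synthetic=60, epsilon=1.0, seed=1)
```

Because the noise is injected into the synthetic samples during preprocessing, the downstream DNN can be trained with an ordinary optimizer, in contrast to DP-SGD, which perturbs gradients at every training step.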