Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning

Cited by: 2
Authors
Abbasi Tadi, Ali [1 ]
Dayal, Saroj [1 ]
Alhadidi, Dima [1 ]
Mohammed, Noman [2 ]
Affiliations
[1] Univ Windsor, Sch Comp Sci, Windsor, ON N9B 3P4, Canada
[2] Univ Manitoba, Dept Comp Sci, Winnipeg, MB R3T 2N2, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
federated learning; membership inference attack; privacy; machine learning;
DOI
10.3390/info14110620
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
The vulnerability of machine learning models to membership inference attacks, which aim to determine whether a specific record belongs to the training dataset, is explored in this paper. Federated learning allows multiple parties to independently train a model without sharing or centralizing their data, offering privacy advantages. However, when private datasets are used in federated learning and model access is granted, the risk of membership inference attacks emerges, potentially compromising sensitive data. To address this, effective defenses in a federated learning environment must be developed without compromising the utility of the target model. This study empirically investigates and compares membership inference attack methodologies in both federated and centralized learning environments, utilizing diverse optimizers and assessing attacks with and without defenses on image and tabular datasets. The findings demonstrate that a combination of knowledge distillation and conventional mitigation techniques (such as Gaussian dropout, Gaussian noise, and activity regularization) significantly mitigates the risk of information leakage in both federated and centralized settings.
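The abstract names Gaussian noise, Gaussian dropout, and activity regularization as the conventional mitigations combined with knowledge distillation. As a minimal sketch of what attaching those layers to a model can look like, the snippet below builds a small Keras classifier with all three; the architecture, layer placement, and hyperparameters are illustrative assumptions rather than the paper's reported configuration, and the knowledge-distillation component is omitted for brevity.

```python
# Minimal sketch (assumed architecture, not the authors' exact setup):
# a dense classifier with the mitigation layers named in the abstract.
import tensorflow as tf


def build_defended_classifier(input_dim: int, num_classes: int) -> tf.keras.Model:
    """Small dense classifier with noise/regularization layers intended to
    narrow the confidence gap between training members and non-members."""
    inputs = tf.keras.Input(shape=(input_dim,))
    # Perturb inputs with additive Gaussian noise (active only during training).
    x = tf.keras.layers.GaussianNoise(stddev=0.1)(inputs)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    # Penalize large hidden activations (activity regularization).
    x = tf.keras.layers.ActivityRegularization(l2=1e-4)(x)
    # Multiplicative Gaussian dropout on the hidden representation.
    x = tf.keras.layers.GaussianDropout(rate=0.2)(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Example usage for a hypothetical tabular dataset with 30 features and 2 classes.
model = build_defended_classifier(input_dim=30, num_classes=2)
model.summary()
```

The same layers can be inserted into convolutional models for the image datasets; the stddev, dropout rate, and regularization strength above are placeholder values that would need tuning against the utility of the target model.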
Pages: 26
Related Papers
50 records in total
  • [31] Shokri, Reza; Stronati, Marco; Song, Congzheng; Shmatikov, Vitaly. Membership Inference Attacks Against Machine Learning Models. 2017 IEEE Symposium on Security and Privacy (SP), 2017: 3-18.
  • [32] Wu, Cong; Chen, Jing; Fang, Qianru; He, Kun; Zhao, Ziming; Ren, Hao; Xu, Guowen; Liu, Yang; Xiang, Yang. Rethinking Membership Inference Attacks Against Transfer Learning. IEEE Transactions on Information Forensics and Security, 2024, 19: 6441-6454.
  • [33] Messaoud, Aghiles Ait; Ben Mokhtar, Sonia; Nitu, Vlad; Schiavoni, Valerio. Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone. Proceedings of the Twenty-Third ACM/IFIP International Middleware Conference (Middleware 2022), 2022: 335-348.
  • [34] Sandeepa, Chamara; Siniarski, Bartlomiej; Wang, Shen; Liyanage, Madhusanka. FL-TIA: Novel Time Inference Attacks on Federated Learning. 2023 IEEE 22nd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom/BigDataSE/CSE/EUC/iSCI 2023), 2024: 173-180.
  • [35] Elbir, Ahmet M.; Coleri, Sinem; Mishra, Kumar Vijay. Hybrid Federated and Centralized Learning. 29th European Signal Processing Conference (EUSIPCO 2021), 2021: 1541-1545.
  • [36] Hu, Li; Yan, Anli; Yan, Hongyang; Li, Jin; Huang, Teng; Zhang, Yingying; Dong, Changyu; Yang, Chunsheng. Defenses to Membership Inference Attacks: A Survey. ACM Computing Surveys, 2024, 56(4).
  • [37] Luqman, Alka; Chattopadhyay, Anupam; Lam, Kwok Yan. Membership Inference Vulnerabilities in Peer-to-Peer Federated Learning. Proceedings of the Inaugural AsiaCCS 2023 Workshop on Secure and Trustworthy Deep Learning Systems (SecTL), 2022.
  • [38] Nugroho, Kukuh; Hendrawan; Iskandar. Comparative Analysis of Federated and Centralized Learning Systems in Predicting Cellular Downlink Throughput Using CNN. IEEE Access, 2025, 13: 22745-22763.
  • [39] Ahmed, Faisal; Sanchez, David; Haddi, Zouhair; Domingo-Ferrer, Josep. MemberShield: A framework for federated learning with membership privacy. Neural Networks, 2025, 181.
  • [40] Xia, Geming; Chen, Jian; Yu, Chaodong; Ma, Jun. Poisoning Attacks in Federated Learning: A Survey. IEEE Access, 2023, 11: 10708-10722.