MIA-BAD: An Approach for Enhancing Membership Inference Attack and its Mitigation with Federated Learning

Cited by: 0
Authors
Banerjee, Soumya [1 ]
Roy, Sandip [1 ]
Ahamed, Sayyed Farid [1 ]
Quinn, Devin [2 ]
Vucovich, Marc [2 ]
Nandakumar, Dhruv [2 ]
Choi, Kevin [2 ]
Rahman, Abdul [2 ]
Bowen, Edward [2 ]
Shetty, Sachin [1 ]
Affiliations
[1] Old Dominion Univ, Virginia Modeling Anal & Simulat Ctr, Norfolk, VA 23529 USA
[2] Deloitte & Touche LLP, London, England
Source
2024 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS, ICNC, 2024
Keywords
Federated Learning; Membership Inference Attack; Privacy; Security
DOI
10.1109/CNC59896.2024.10556313
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The membership inference attack (MIA) is a popular paradigm for compromising the privacy of a machine learning (ML) model. MIA exploits the natural tendency of ML models to overfit their training data: an attack model is trained to distinguish training-time from test-time prediction confidence and thereby infer membership. Federated Learning (FL) is a privacy-preserving ML paradigm that enables multiple clients to train a unified model without disclosing their private data. In this paper, we propose an enhanced Membership Inference Attack with a Batch-wise generated Attack Dataset (MIA-BAD), a modification of the standard MIA approach. We show that the MIA is more accurate when the attack dataset is generated batch-wise, which quantitatively shrinks the attack dataset while qualitatively improving it. We then show that training an ML model through FL has some distinct advantages, and investigate how the threat introduced by the proposed MIA-BAD approach can be mitigated with FL. Finally, we demonstrate the qualitative effects of the proposed MIA-BAD methodology through extensive experiments with various target datasets, variable numbers of federated clients, and training batch sizes.
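The confidence gap the abstract describes can be illustrated with a minimal sketch. This is not the paper's code: the threshold, the simulated confidence distributions, and the interpretation of "batch-wise" generation (averaging per-sample confidences over training batches) are illustrative assumptions, not the authors' method.

```python
import random

def infer_membership(confidences, threshold=0.8):
    """Label a sample 'member' when the target model's top-class
    confidence exceeds the threshold (overfit models are more
    confident on data they were trained on)."""
    return [c >= threshold for c in confidences]

def batchwise_scores(confidences, batch_size=10):
    """Hypothetical batch-wise aggregation: average confidences over
    batches, yielding fewer but less noisy attack points."""
    return [sum(confidences[i:i + batch_size]) / batch_size
            for i in range(0, len(confidences), batch_size)]

random.seed(0)
# Simulated top-class confidences: members skew high, non-members lower.
members = [random.uniform(0.85, 1.0) for _ in range(100)]
non_members = [random.uniform(0.4, 0.85) for _ in range(100)]
labels = [True] * 100 + [False] * 100

# Sample-wise attack: some non-members happen to score above threshold.
preds = infer_membership(members + non_members)
acc_samplewise = sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Batch-wise attack: averaging suppresses per-sample noise, so the two
# populations separate more cleanly on far fewer attack points.
batch_preds = infer_membership(
    batchwise_scores(members) + batchwise_scores(non_members))
batch_labels = [True] * 10 + [False] * 10
acc_batchwise = sum(p == y for p, y in zip(batch_preds, batch_labels)) / 20
```

Under these simulated distributions the batch-averaged scores overlap far less than the raw per-sample confidences, which matches the abstract's claim that a smaller, batch-wise attack dataset can be qualitatively stronger.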
Pages: 635-640
Page count: 6
References
24 in total
[1]   Federated Learning for Privacy Preservation in Smart Healthcare Systems: A Comprehensive Survey [J].
Ali, Mansoor ;
Naeem, Faisal ;
Tariq, Muhammad ;
Kaddoum, Georges .
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (02) :778-789
[2]  
McMahan HB, 2018, Arxiv, DOI arXiv:1710.06963
[3]  
Brendan McMahan H., CORR
[4]   De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks [J].
Chen, Jian ;
Zhang, Xuxin ;
Zhang, Rui ;
Wang, Chen ;
Liu, Ling .
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 (16) :3412-3425
[5]   A systematic review of federated learning applications for biomedical data [J].
Crowson, Matthew G. ;
Moukheiber, Dana ;
Arevalo, Aldo Robles ;
Lam, Barbara D. ;
Mantena, Sreekar ;
Rana, Aakanksha ;
Goss, Deborah ;
Bates, David W. ;
Celi, Leo Anthony .
PLOS DIGITAL HEALTH, 2022, 1 (05)
[6]   Ensemble methods in machine learning [J].
Dietterich, TG .
MULTIPLE CLASSIFIER SYSTEMS, 2000, 1857 :1-15
[7]  
Geyer Robin C., DIFFERENTIALLY PRIVA
[8]   Source Inference Attacks in Federated Learning [J].
Hu, Hongsheng ;
Salcic, Zoran ;
Sun, Lichao ;
Dobbie, Gillian ;
Zhang, Xuyun .
2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021), 2021, :1102-1107
[9]  
Krizhevsky Alex., 2009, Technical Report
[10]  
LeCun Y., 1998, MNIST DATABASE HANDW