GAN-based data reconstruction attacks in split learning

Times Cited: 0
Authors
Zeng, Bo [1 ]
Luo, Sida [1 ]
Yu, Fangchao [1 ]
Yang, Geying [1 ]
Zhao, Kai [1 ]
Wang, Lina [1 ]
Affiliations
[1] Wuhan University, School of Cyber Science and Engineering, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, Wuhan 430072, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Distributed privacy-preserving machine learning; Split learning; Data reconstruction attacks; Model inversion; Generative adversarial networks
DOI
10.1016/j.neunet.2025.107150
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Owing to its distinctive distributed privacy-preserving architecture, split learning is widely used in scenarios where client-side computational resources are limited. Unlike federated learning, where each client retains the whole model, split learning partitions the model into two segments hosted separately by the server and the client, so that neither party can directly access the complete model structure, which strengthens its resilience against attacks. However, existing studies have shown that split learning remains susceptible to data reconstruction attacks even when access is restricted to partial model outputs. Prior research, moreover, has predominantly relied on stringent assumptions and on an attacker that is the server, with the ability to access global information. Building on this understanding, we devise GAN-based data reconstruction attacks within the U-shaped split learning framework, examining and confirming the feasibility of attacks launched from both the server and the client side, together with the assumptions each requires. Specifically, for server-side attacks we propose the Model Approximation Estimation Reconstruction Attack (MAERA), which relaxes the required prior assumptions, and we introduce the Distillation-based Client-side Reconstruction Attack (DCRA), which performs data reconstruction from the client side for the first time. Experimental results demonstrate the effectiveness and robustness of the proposed frameworks across various datasets. In particular, MAERA needs only 1% of the test-set samples and 1% of the private data samples of CIFAR100 to mount effective attacks, while DCRA effectively steals the model from the client and, compared with conventional Maximum A Posteriori (MAP) estimation algorithms, produces more pronounced reconstructions of target-class samples while inferring data-distribution characteristics.
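For readers unfamiliar with the topology the abstract refers to, the following is a minimal sketch of a U-shaped split learning forward and backward pass, assuming a toy CNN with arbitrary cut points; the layer choices, shapes, and cut positions are illustrative assumptions, not the configuration used in the paper. In the U-shaped variant the client keeps both the first and the last model segments, so raw inputs and labels never leave the client.

    # Minimal sketch of a U-shaped split learning step (illustrative only).
    import torch
    import torch.nn as nn

    client_head = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # client: first segment
    server_body = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                nn.AdaptiveAvgPool2d(1), nn.Flatten())      # server: middle segment
    client_tail = nn.Linear(32, 10)                                         # client: last segment (U-shape)

    x = torch.randn(8, 3, 32, 32)       # private client batch (CIFAR-like shape, assumed)
    y = torch.randint(0, 10, (8,))      # private labels, kept on the client

    smashed = client_head(x)            # "smashed data" crossing the first cut to the server
    features = server_body(smashed)     # the server never sees x or y directly
    logits = client_tail(features)      # activations cross the second cut back to the client

    loss = nn.CrossEntropyLoss()(logits, y)
    loss.backward()                     # gradients flow back across both cuts

The attacks studied in the paper target the intermediate activations and gradients exchanged across these two cuts; the sketch only shows where that exchanged data arises, not the GAN-based reconstruction procedure itself.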
Pages: 15