Secure Split Learning Against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks

Cited by: 4
Authors
Mao, Yunlong [1 ]
Xin, Zexi [1 ]
Li, Zhenyu [2 ]
Hong, Jue [3 ]
Yang, Qingyou [3 ]
Zhong, Sheng [1 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Univ Calif San Diego, San Diego, CA USA
[3] ByteDance Ltd, Beijing, Peoples R China
Source
COMPUTER SECURITY - ESORICS 2023, PT IV | 2024, Vol. 14347
Funding
National Natural Science Foundation of China
Keywords
Privacy preservation; Inference attack; Reconstruction attack; Feature space hijacking attack; Split learning;
DOI
10.1007/978-3-031-51482-1_2
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Split learning of deep neural networks (SplitNN) offers a promising way for a guest and a host, possibly from different backgrounds and holding vertically partitioned features, to learn jointly for their mutual benefit. However, SplitNN creates a new attack surface for an adversarial participant. By investigating the effects of highly threatening attacks, including property inference, data reconstruction, and feature space hijacking attacks, we identify the underlying vulnerability of SplitNN. To protect SplitNN, we design a privacy-preserving tunnel for information exchange. The intuition is to perturb the propagation of knowledge in each direction with a controllable, unified mechanism. To this end, we propose a new activation function named R3eLU, which transforms private smashed data and partial loss into randomized responses. We make the first attempt to secure split learning against these three threatening attacks and present a fine-grained privacy budget allocation scheme. Our analysis proves that our privacy-preserving SplitNN solution provides a tight privacy budget, while our experiments show that it outperforms existing solutions in most cases and achieves a good tradeoff between defense and model usability.
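The abstract does not define R3eLU, but it describes the general idea: the forward "smashed data" leaving the guest's cut layer are turned into randomized responses before the host sees them. The sketch below is an illustrative stand-in for that idea, not the paper's actual R3eLU: a ReLU whose outputs are perturbed by a randomized-response keep/drop mask plus Laplace noise scaled by an assumed privacy budget `epsilon` (both parameter names are our own).

```python
import numpy as np

def r3elu_sketch(x, epsilon=1.0, keep_prob=0.9, rng=None):
    """Illustrative DP-perturbed ReLU (NOT the paper's exact R3eLU).

    Two-step perturbation of the forward smashed data:
      1. randomized response: each activation survives with probability
         ``keep_prob`` and is zeroed otherwise;
      2. Laplace noise with scale 1/epsilon is added to survivors,
         consuming part of an assumed privacy budget ``epsilon``.
    """
    rng = np.random.default_rng() if rng is None else rng
    a = np.maximum(x, 0.0)                          # standard ReLU
    keep = rng.random(a.shape) < keep_prob          # randomized-response mask
    noise = rng.laplace(scale=1.0 / epsilon, size=a.shape)
    return np.where(keep, a + noise, 0.0)

smashed = r3elu_sketch(np.array([[1.0, -2.0, 3.0]]),
                       epsilon=1.0,
                       rng=np.random.default_rng(0))
```

In the paper's setting the same kind of perturbation is also applied to the partial loss flowing back from host to guest, which is what makes the "tunnel" bidirectional; the budget allocation scheme then splits `epsilon` across layers and directions.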
Pages: 23-43
Page count: 21