Advancements and challenges in privacy-preserving split learning: experimental findings and future directions

Cited by: 0
Authors
Alhindi, Afnan [1 ]
Al-Ahmadi, Saad [1 ]
Ismail, Mohamed Maher Ben [1 ]
Affiliations
[1] King Saud Univ, Coll Comp & Informat Sci, Comp Sci Dept, Riyadh 11543, Saudi Arabia
Keywords
Data privacy; Distributed collaborative machine learning; Privacy-preserving split learning; Split learning; COLLABORATIVE INFERENCE; ARCHITECTURE;
DOI
10.1007/s10207-025-01045-9
Chinese Library Classification: TP [Automation technology; computer technology]
Discipline code: 0812
Abstract
Machine Learning (ML) has been applied with remarkable success in various fields. However, training ML models on personal and/or private data has inadvertently revealed sensitive information and caused serious privacy risks. Consequently, Distributed Collaborative Machine Learning (DCML) has emerged as a promising approach to address such risks. Typically, DCML enables multiple entities to collaborate on training ML models without disclosing sensitive information. One of the most recent DCML techniques, Split Learning (SL), partitions a neural network into a client and a server sub-network. Specifically, SL enables different entities to collaboratively train ML models by transmitting extracted features instead of sensitive raw data. Although SL prevents sending raw data to the server, the learned features may still reveal sensitive user information. This limitation has prompted Privacy-Preserving Split Learning (PPSL), which focuses on reinforcing data privacy within SL frameworks. This article presents a comprehensive survey of PPSL studies, covering relevant works published between 2018 and 2024. The main objective of this research is to identify the key contributions of state-of-the-art PPSL methods and to investigate and compare their performance. Accordingly, this survey supports the scientific community's effort to bridge research gaps and address challenges relevant to PPSL in resource-constrained environments. Furthermore, the findings of this survey highlight the importance of developing adaptive privacy protection techniques to support future advancements in the field and strike a balance between data privacy and utility in the context of privacy-preserving split learning.
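The client/server partition that the abstract describes can be sketched in a few lines. The following is a minimal illustrative forward pass only, not the survey's method: the layer sizes, the cut point, and the function names are assumptions chosen for clarity. The key point it shows is that the raw input stays inside the client function, and only the cut-layer activations ("smashed data") are handed to the server.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side sub-network: raw data never leaves this function.
# Dimensions are illustrative assumptions, not from the survey.
W_client = rng.standard_normal((16, 8))   # 16-dim input -> 8-dim features

def client_forward(x):
    """Compute activations at the cut layer (the "smashed data")."""
    return np.maximum(x @ W_client, 0.0)  # linear layer + ReLU

# Server-side sub-network: it only ever sees the transmitted features.
W_server = rng.standard_normal((8, 3))    # features -> 3 class scores

def server_forward(features):
    return features @ W_server            # logits

x_private = rng.standard_normal((4, 16))  # batch of private client data
smashed = client_forward(x_private)       # only this crosses the network
logits = server_forward(smashed)
```

The privacy concern the survey addresses is visible even in this toy example: `smashed` is a deterministic function of `x_private`, so an adversarial server may still be able to reconstruct or infer properties of the raw input from it, which is what motivates PPSL defenses.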
Pages: 24