GuardianML: Anatomy of Privacy-Preserving Machine Learning Techniques and Frameworks

Cited by: 0
Authors
Njungle, Nges Brian [1 ]
Jahns, Eric [1 ]
Wu, Zhenqi [1 ]
Mastromauro, Luigi [1 ]
Stojkov, Milan [2 ]
Kinsy, Michel A. [1 ]
Affiliations
[1] Arizona State Univ, STAM Ctr, Ira A Fulton Sch Engn, Tempe, AZ 85281 USA
[2] Univ Novi Sad, Fac Tech Sci, Novi Sad 21000, Serbia
Keywords
Machine learning; Data models; Federated learning; Computational modeling; Surveys; Training; Protocols; Libraries; Recommender systems; Privacy; Differential privacy; federated learning; homomorphic encryption; multi-party computation; privacy-preserving machine learning; trusted execution environment
DOI
10.1109/ACCESS.2025.3557228
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Subject Classification Code
0812;
Abstract
Machine learning has become integral to our lives, finding applications in nearly every aspect of our daily routines. However, the use of personal information in machine learning applications has raised concerns about user data privacy and security. As these concerns grow, algorithms and techniques for achieving robust privacy-preserving machine learning (PPML) have become a pressing technical challenge. PPML aims to safeguard the confidentiality of both data and models and to ensure that sensitive information remains protected during training and inference. A variety of techniques, protocols, libraries, and frameworks have been developed to enable PPML, each with its own trade-offs in implementation complexity, computational efficiency, communication overhead, security guarantees, and scalability. However, choosing the right combination of technique, framework, and corresponding algorithmic or system parameters for a specific deployment instance can be very difficult. In this work, we introduce GuardianML, an open-source recommendation system for selecting the correct parameters and a suitable framework for specific PPML use cases. GuardianML allows users to search through a wide range of PPML frameworks, techniques, protocols, and libraries based on a set of objectives. It filters candidate frameworks using user-defined criteria, such as the number of parties involved in multi-party computation or the need to minimize communication costs in homomorphic encryption scenarios. The system's recommendations and optimizations are formulated as a maximization problem using integer linear programming to identify the most suitable solution for each use case. Moreover, this work thoroughly analyzes and presents the seventy relevant frameworks in the system's database. Additionally, we offer an open-source repository containing practical examples and documentation for some of the frameworks.
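To give a flavor of the kind of integer-linear-programming formulation the abstract describes, the following Python sketch casts framework selection as a 0/1 maximization problem using the PuLP library. It is an illustrative sketch only, not GuardianML's actual implementation: the framework names, suitability scores, and user criteria below are hypothetical.

# Illustrative sketch: framework recommendation as a 0/1 integer linear program.
# Framework names, scores, and criteria are hypothetical, not taken from GuardianML.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

# Hypothetical candidates with a suitability score (higher is better)
# and simple capability attributes used as hard constraints.
candidates = {
    "FrameworkA": {"score": 7, "mpc_parties": 3, "low_comm_he": False},
    "FrameworkB": {"score": 9, "mpc_parties": 2, "low_comm_he": True},
    "FrameworkC": {"score": 5, "mpc_parties": 4, "low_comm_he": True},
}

# Example user-defined criteria (hypothetical).
required_parties = 3          # e.g. number of parties in multi-party computation
require_low_comm_he = False   # e.g. must minimize communication cost under HE

prob = LpProblem("ppml_framework_selection", LpMaximize)

# One binary decision variable per candidate framework.
x = {name: LpVariable(f"select_{name}", cat=LpBinary) for name in candidates}

# Objective: maximize the suitability score of the chosen framework.
prob += lpSum(candidates[name]["score"] * x[name] for name in candidates)

# Exactly one framework is recommended.
prob += lpSum(x[name] for name in candidates) == 1

# Hard constraints: frameworks violating a user criterion cannot be selected.
for name, attrs in candidates.items():
    if attrs["mpc_parties"] < required_parties:
        prob += x[name] == 0
    if require_low_comm_he and not attrs["low_comm_he"]:
        prob += x[name] == 0

prob.solve()
chosen = [name for name in candidates if value(x[name]) == 1]
print("Recommended framework:", chosen)

With these invented scores and constraints, FrameworkB is infeasible (too few MPC parties) and the solver returns FrameworkA, the highest-scoring feasible option; a real system would draw scores and constraints from its framework database and the user's stated objectives.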
Pages: 61483-61510
Page count: 28