Preserving data privacy in machine learning systems

Cited by: 37
Authors
El Mestari, Soumia Zohra [1 ]
Lenzini, Gabriele [1 ]
Demirci, Huseyin [1 ]
Affiliations
[1] Univ Luxembourg, SnT, 2 Av Univ, L-4365 Esch Sur Alzette, Luxembourg
Keywords
Privacy enhancing technologies; Trustworthy machine learning; Machine learning; Differential privacy; Homomorphic encryption; Fully homomorphic encryption; Functional encryption; Secure multiparty computation; K-anonymity; Privacy threats; Attacks
DOI
10.1016/j.cose.2023.103605
Chinese Library Classification
TP [Automation technology; computer technology]
Subject Classification
0812
Abstract
The wide adoption of machine learning to solve a large set of real-life problems has come with the need to collect and process large volumes of data, some of which are personal and sensitive, raising serious concerns about data protection. Privacy-enhancing technologies (PETs) are often proposed as a means to protect personal data and to achieve the overall trustworthiness required by current EU regulations on data protection and AI. However, an off-the-shelf application of PETs is insufficient to ensure high-quality data protection, and their limits need to be understood. This work systematically discusses the risks to data protection in modern machine learning systems, taking the original perspective of the data owners (those who hold the various data sets, data models, or both) throughout the machine learning life cycle and across the different machine learning architectures. It argues that the origin of the threats, the risks to the data, and the level of protection offered by PETs depend on the data processing phase, the roles of the parties involved, and the architecture in which the machine learning systems are deployed. By offering a framework in which to discuss privacy and confidentiality risks for data owners, and by identifying and assessing privacy-preserving countermeasures for machine learning, this work can facilitate the discussion about compliance with EU regulations and directives. We also discuss current challenges and research questions that remain open in the field. In this respect, the paper provides researchers and developers working on machine learning with a comprehensive body of knowledge to advance the science of data protection in machine learning and in closely related fields such as artificial intelligence.
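Of the PETs named in the keywords, differential privacy is the simplest to illustrate concretely. Below is a minimal sketch of the classic Laplace mechanism for an ε-differentially-private counting query; the function names and parameters are illustrative only and do not come from the paper itself. A counting query has sensitivity 1, so noise drawn from Laplace(0, 1/ε) suffices:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via inverse-CDF from a single uniform draw."""
    u = rng.random() - 0.5          # u in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon, rng=None):
    """Release a count with epsilon-differential privacy.

    Counting queries have sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-DP.
    """
    rng = rng or random.Random()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

Smaller ε means stronger privacy but noisier answers; the data owner tunes this trade-off against the utility needed by the model or analyst.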
Pages: 22