Privacy-preserving inference resistant to model extraction attacks

Cited by: 1
Authors
Byun, Junyoung [1 ]
Choi, Yujin [2 ]
Lee, Jaewook [2 ]
Park, Saerom [3 ]
Affiliations
[1] Chung Ang Univ, 84 Heukseok Ro, Seoul 06974, South Korea
[2] Seoul Natl Univ, 1 Gwanak Ro, Seoul 08826, South Korea
[3] Ulsan Natl Inst Sci & Technol, 50 UNIST gil, Ulsan 44919, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Homomorphic encryption; Secure computation; Model extraction attacks; Model extraction defenses;
DOI
10.1016/j.eswa.2024.124830
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Privacy-Preserving Deep Learning (PPDL) has been successfully applied in the inference phase to preserve the privacy of input data. However, PPDL models remain vulnerable to model extraction attacks, in which an adversary attempts to steal the trained model itself. In this paper, we propose a new defense against model extraction attacks designed specifically for PPDL based on secure multi-party computation and homomorphic encryption. The proposed method confounds inference queries on out-of-distribution data by deploying a fake network alongside the target network, while optimizing computational efficiency for PPDL environments. Furthermore, we introduce Wasserstein regularization to make the fake network's output distribution indistinguishable from that of the target network, thwarting adversaries' attempts to discern any discrepancy within the PPDL framework. The experimental results demonstrate that our defense attains a good accuracy-security trade-off and is effective against a wide range of attacks, including adaptive attacks and transfer attacks. Our work contributes to the field of PPDL by extending its scope to algorithmic security and reliability beyond privacy.
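The Wasserstein regularization mentioned in the abstract can be illustrated with a toy computation. The sketch below is not the paper's implementation; it only shows, under simplifying assumptions (scalar confidence scores, equal sample sizes, simulated distributions), how an empirical 1-Wasserstein distance would penalize a fake network whose output distribution diverges from the target network's:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.

    For sorted equal-size samples, W1 reduces to the mean absolute
    difference of the order statistics.
    """
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    return float(np.mean(np.abs(a - b)))

# Simulated top-class confidence scores (hypothetical stand-ins for
# the networks' outputs; Beta parameters are arbitrary choices).
rng = np.random.default_rng(0)
target_conf = rng.beta(8, 2, size=1000)   # confident target predictions
fake_conf   = rng.beta(8, 2, size=1000)   # well-matched fake outputs
bad_fake    = rng.beta(2, 8, size=1000)   # poorly matched fake outputs

# A matched fake network incurs a much smaller regularization penalty,
# so an adversary probing output statistics sees no telltale gap.
assert wasserstein_1d(target_conf, fake_conf) < wasserstein_1d(target_conf, bad_fake)
```

In training, a differentiable version of this distance (e.g. via a sorted-sample or dual formulation) would be added to the fake network's loss, pushing its outputs on out-of-distribution queries toward the target network's output statistics.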
Pages: 13