PCA-based membership inference attack for machine learning models

Cited by: 0
Authors
Peng C. [1 ,2 ,3 ]
Gao T. [1 ,2 ]
Liu H. [1 ]
Ding H. [3 ,4 ]
Affiliations
[1] State Key Laboratory of Public Big Data, Guizhou University, Guiyang
[2] Institute of Cryptography and Data Security, Guizhou University, Guiyang
[3] College of Computer Science and Technology, Guizhou University, Guiyang
[4] College of Information, Guizhou University of Finance and Economics, Guiyang
Source
Tongxin Xuebao/Journal on Communications | 2022 / Vol. 43 / No. 01
Funding
National Natural Science Foundation of China
Keywords
Adversarial example; Machine learning; Membership inference attack; Principal component analysis; Privacy leakage;
DOI
10.11959/j.issn.1000-436x.2022009
Abstract
To address the failure of current black-box membership inference attacks under restricted query access, a PCA-based membership inference attack was proposed. First, to solve the restricted-access problem, a fast-decision membership inference attack named fast-attack was proposed: using perturbed samples obtained via the distance sign gradient, it maps perturbation difficulty to distance categories for membership inference. Second, to address the low transferability of fast-attack, a PCA-based membership inference attack was proposed, combining fast-attack's perturbation-category idea with PCA to suppress the low-transferability behavior caused by over-reliance on the target model. Finally, experiments show that fast-attack reduces query cost while preserving attack accuracy, and that the PCA-based attack outperforms the baseline attack in the unsupervised setting, improving model transferability by 10% over fast-attack. © 2022, Editorial Board of Journal on Communications. All rights reserved.
Pages: 149-160
Page count: 11
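
To make the abstract's mechanism concrete, the sketch below illustrates the general idea in a label-only setting: score each sample by how hard it is to push across the target model's decision boundary (the intuition behind fast-attack's perturbation-difficulty categories), then run an unsupervised PCA variant that thresholds the first principal component of the perturbation features. This is a toy under stated assumptions, not a reproduction of the paper's algorithm: the data, model, random probe directions, and all identifiers (direction_distances, pred_fast, pred_pca) are illustrative.

```python
# Minimal sketch of label-only, perturbation-distance membership inference
# with an unsupervised PCA variant. All names and parameters are
# illustrative assumptions, not the paper's fast-attack implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the target model's private training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Black-box target: the attacker only observes predicted labels.
target = LogisticRegression(max_iter=1000).fit(X_member, y_member)

def direction_distances(x, dirs, max_eps=5.0, steps=25):
    """Label-only probe: for each fixed unit direction, return the
    smallest radius whose perturbation flips the predicted label
    (or max_eps if none does). Members tend to sit deeper inside
    their decision region, so their distances tend to be larger."""
    base = target.predict(x.reshape(1, -1))[0]
    dists = []
    for d in dirs:
        dist = max_eps
        for eps in np.linspace(max_eps / steps, max_eps, steps):
            if target.predict((x + eps * d).reshape(1, -1))[0] != base:
                dist = eps
                break
        dists.append(dist)
    return dists

# Fixed random probe directions shared by all queried samples.
dirs = rng.normal(size=(8, X.shape[1]))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Score a balanced pool of members and non-members.
pool = np.vstack([X_member[:100], X_nonmember[:100]])
truth = np.array([1] * 100 + [0] * 100)  # 1 = member
feats = np.array([direction_distances(x, dirs) for x in pool])

# fast-attack-style decision: threshold the mean perturbation distance.
mean_scores = feats.mean(axis=1)
pred_fast = (mean_scores >= np.median(mean_scores)).astype(int)

# PCA variant: threshold the first principal component of the
# perturbation features instead (unsupervised, no shadow model).
pc1 = PCA(n_components=1).fit_transform(feats).ravel()
if np.corrcoef(pc1, mean_scores)[0, 1] < 0:  # resolve PCA sign ambiguity
    pc1 = -pc1
pred_pca = (pc1 >= np.median(pc1)).astype(int)

print("fast-style attack accuracy:", (pred_fast == truth).mean())
print("PCA-based attack accuracy:", (pred_pca == truth).mean())
```

Both variants query the target with hard labels only, matching the restricted black-box access setting the abstract describes; the median threshold keeps the decision rule unsupervised.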