Security and Privacy of Machine Learning Models: A Survey

Cited by: 0
Authors
Ji S.-L. [1]
Du T.-Y. [1]
Li J.-F. [1]
Shen C. [2]
Li B. [3]
Affiliations
[1] Institute of Cyberspace Research and College of Computer Science and Technology, Zhejiang University, Hangzhou
[2] Ministry of Education Key Laboratory for Intelligent Networks and Network Security and Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an
[3] Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana-Champaign, IL 61822
Source
Ruan Jian Xue Bao/Journal of Software | 2021, Vol. 32, No. 1
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China; Natural Science Foundation of Zhejiang Province
Keywords
Adversarial example; Artificial intelligence security; Machine learning; Model privacy; Poisoning attack;
DOI
10.13328/j.cnki.jos.006131
Abstract
In the era of big data, breakthroughs in the theories and technologies of deep learning, reinforcement learning, and distributed learning have provided strong support for machine learning at both the data and algorithm levels, and have driven its large-scale development and industrialization. However, although machine learning models perform well in many real-world applications, they still face numerous security and privacy threats at the data, model, and application levels, threats characterized by their diversity, concealment, and dynamic evolution. These issues have attracted extensive attention from both academia and industry: a large number of researchers have studied model security and privacy in depth from the perspectives of attack and defense, and have proposed a series of attack and defense methods. This survey reviews the security and privacy issues of machine learning, systematically summarizes existing research, and clarifies the advantages and disadvantages of current work. Finally, it discusses open challenges and future research directions, aiming to guide follow-up researchers and to further promote the development and application of research on machine learning model security and privacy. © Copyright 2021, Institute of Software, the Chinese Academy of Sciences. All rights reserved.
Pages: 41-67
Page count: 26
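For orientation only, the sketch below illustrates one threat class named in the keywords above (adversarial examples) with a minimal single-step gradient-sign (FGSM-style) perturbation. It is not taken from the surveyed paper; `model`, `x`, `y`, and `epsilon` are placeholder assumptions standing in for a trained classifier, an input batch, its labels, and a perturbation budget.

```python
# Minimal FGSM-style adversarial-example sketch (illustrative only).
# Assumes a trained PyTorch classifier `model`, an input batch `x` in [0, 1],
# and integer class labels `y`; none of these come from the surveyed paper.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial inputs with one signed-gradient step of size epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Move each pixel in the direction that increases the classification loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```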