Progress and Future Challenges of Security Attacks and Defense Mechanisms in Machine Learning

Cited: 0
Authors
Li X.-J. [1 ,2 ]
Wu G.-W. [1 ,2 ]
Yao L. [1 ,3 ]
Zhang W.-Z. [4 ]
Zhang B. [3 ]
Affiliations
[1] School of Software Technology, Dalian University of Technology, Dalian
[2] Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian University of Technology, Dalian
[3] Cyberspace Security Research Center, Peng Cheng Laboratory, Shenzhen
[4] School of Computer Science and Technology, Harbin Institute of Technology, Harbin
Source
Corresponding author: Wu, Guo-Wei (wgwdut@dlut.edu.cn) | Chinese Academy of Sciences, Vol. 32
Funding
National Natural Science Foundation of China
Keywords
Attack classification; Defense mechanism; Machine learning; Security and privacy;
DOI
10.13328/j.cnki.jos.006147
Abstract
Machine learning applications span all areas of artificial intelligence, but due to storage and transmission security issues and to flaws in machine learning algorithms themselves, machine learning faces a variety of security- and privacy-oriented attacks. This survey classifies security and privacy attacks by their location and timing within the machine learning pipeline, and analyzes the causes and methods of data poisoning attacks, adversarial attacks, data stealing attacks, and querying attacks. Furthermore, existing security defense mechanisms are summarized. Finally, a perspective on future work and challenges in this research area is discussed. © Copyright 2021, Institute of Software, the Chinese Academy of Sciences. All rights reserved.
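Of the attack classes the abstract names, adversarial attacks are the most easily illustrated in isolation. As a minimal sketch (not from the surveyed paper; the toy logistic model, weights, and function names below are purely illustrative assumptions), the fast gradient sign method perturbs an input in the direction of the sign of the loss gradient so that a trained model's loss on the true label increases:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # binary cross-entropy of a logistic model on a single example
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # for a logistic model, the loss gradient w.r.t. the input x
    # has the closed form (p - y) * w, so no autodiff is needed here
    grad_x = (sigmoid(w @ x) - y) * w
    # step each input coordinate by eps in the sign of its gradient
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)      # toy "trained" weights (illustrative)
x = rng.normal(size=4)      # clean input
y = 1.0                     # true label
x_adv = fgsm(w, x, y, eps=0.25)
# x_adv stays within an L-infinity ball of radius eps around x,
# yet the model's loss on the true label strictly increases
```

The same idea carries over to deep networks, where the input gradient comes from backpropagation rather than a closed form; the eps bound is what keeps the perturbation small enough to be imperceptible in image domains.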
Pages: 406-423 (17 pages)
References
97 entries
  • [1] Silver D, Huang A, Maddison CJ, et al., Mastering the game of Go with deep neural networks and tree search, Nature, 529, 7587, pp. 484-489, (2016)
  • [2] Dalvi N, Domingos P, Sanghai S, Verma D, et al., Adversarial classification, Proc. of the 10th ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining, pp. 99-108, (2004)
  • [3] Lowd D, Meek C., Adversarial learning, Proc. of the 11th ACM SIGKDD Int'l Conf. on Knowledge Discovery in Data Mining, (2005)
  • [4] Kearns MJ, Li M., Learning in the presence of malicious errors, SIAM Journal on Computing, 22, 4, pp. 807-837, (1993)
  • [5] Szegedy C, Zaremba W, Sutskever I, et al., Intriguing properties of neural networks, Proc. of the Int'l Conf. on Learning Representations, (2014)
  • [6] Carlini N, Wagner D., Towards evaluating the robustness of neural networks, Proc. of the IEEE Symp. on Security and Privacy (SP), pp. 39-57, (2017)
  • [7] Papernot N, McDaniel P, Sinha A, et al., SoK: Security and privacy in machine learning, Proc. of the IEEE European Symp. on Security and Privacy, pp. 399-414, (2018)
  • [8] Papernot N, McDaniel PD, Jha S, et al., The limitations of deep learning in adversarial settings, Proc. of the IEEE European Symp. on Security and Privacy, pp. 372-387, (2016)
  • [9] Song L, Ma CG, Duan GH., Machine learning security and privacy: A survey, Chinese Journal of Network and Information Security, 4, 8, pp. 1-11, (2018)
  • [10] Kurakin A, Goodfellow IJ, Bengio S, et al., Adversarial machine learning at scale, Proc. of the Int'l Conf. on Learning Representations, (2017)