Wild patterns: Ten years after the rise of adversarial machine learning

Cited by: 707
Authors
Biggio, Battista [1 ,2 ]
Roli, Fabio [1 ,2 ]
Affiliations
[1] Univ Cagliari, Dept Elect & Elect Engn, Cagliari, Italy
[2] Pluribus One, Cagliari, Italy
Funding
European Union Horizon 2020;
Keywords
Adversarial machine learning; Evasion attacks; Poisoning attacks; Adversarial examples; Secure learning; Deep learning; SECURITY; CLASSIFIERS; ROBUSTNESS; ATTACKS; CLASSIFICATION; DEFENSES;
DOI
10.1016/j.patcog.2018.07.023
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from earlier pioneering work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms. (C) 2018 Elsevier Ltd. All rights reserved.
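The test-time evasion attacks the abstract refers to craft small input perturbations that flip a model's prediction. The following is a minimal, hedged sketch of one such attack, a single gradient-sign step in the style of FGSM; it is not code from the paper, and the PyTorch model, input, label, and epsilon below are purely illustrative assumptions.

```python
# Minimal sketch of a test-time evasion attack (FGSM-style gradient-sign step).
# Illustrative only: model, data, and epsilon are assumptions, not from the paper.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft x_adv = clip(x + epsilon * sign(grad_x loss(model(x), y)))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One gradient-sign step, then clip back to the valid input range [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage on a randomly initialized linear classifier (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a fake "image"
y = torch.tensor([3])          # its assumed true label
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude bounded by epsilon
```

The same gradient information drives most white-box evasion attacks the survey covers; poisoning attacks instead apply such perturbations to training points.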
Pages: 317 - 331
Number of pages: 15
Related papers
50 items in total
  • [21] Ten years of image analysis and machine learning competitions in dementia
    Bron, Esther E.
    Klein, Stefan
    Reinke, Annika
    Papma, Janne M.
    Maier-Hein, Lena
    Alexander, Daniel C.
    Oxtoby, Neil P.
    NEUROIMAGE, 2022, 253
  • [22] Adversarial Machine Learning: Bayesian Perspectives
    Insua, David Rios
    Naveiro, Roi
    Gallego, Victor
    Poulos, Jason
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2023, 118 (543) : 2195 - 2206
  • [23] Adversarial Machine Learning Attacks and Defences in Multi-Agent Reinforcement Learning
    Standen, Maxwell
    Kim, Junae
    Szabo, Claudia
    ACM COMPUTING SURVEYS, 2025, 57 (05)
  • [24] Adversarial Machine Learning
    Tygar, J. D.
    IEEE INTERNET COMPUTING, 2011, 15 (05) : 4 - 6
  • [25] Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
    Ma, Yuxin
    Xie, Tiankai
    Li, Jundong
    Maciejewski, Ross
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2020, 26 (01) : 1075 - 1085
  • [26] Adversarial machine learning for network intrusion detection: A comparative study
    Jmila, Houda
    Ibn Khedher, Mohamed
    COMPUTER NETWORKS, 2022, 214
  • [27] Closeness and uncertainty aware adversarial examples detection in adversarial machine learning
    Tuna, Omer Faruk
    Catak, Ferhat Ozgur
    Eskil, M. Taner
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 101
  • [28] Adversarial Machine Learning for Network Intrusion Detection Systems: A Comprehensive Survey
    He, Ke
    Kim, Dan Dongseong
    Asghar, Muhammad Rizwan
IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2023, 25 (01) : 538 - 566
  • [29] Adversarial Machine Learning for NextG Covert Communications Using Multiple Antennas
    Kim, Brian
    Sagduyu, Yalin
    Davaslioglu, Kemal
    Erpek, Tugba
    Ulukus, Sennur
    ENTROPY, 2022, 24 (08)
  • [30] A state-of-the-art review on adversarial machine learning in image classification
    Bajaj, Ashish
    Vishwakarma, Dinesh Kumar
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (03) : 9351 - 9416