Wild patterns: Ten years after the rise of adversarial machine learning

Cited by: 755
Authors
Biggio, Battista [1 ,2 ]
Roli, Fabio [1 ,2 ]
Affiliations
[1] Univ Cagliari, Dept Elect & Elect Engn, Cagliari, Italy
[2] Pluribus One, Cagliari, Italy
Funding
European Union Horizon 2020;
Keywords
Adversarial machine learning; Evasion attacks; Poisoning attacks; Adversarial examples; Secure learning; Deep learning; SECURITY; CLASSIFIERS; ROBUSTNESS; ATTACKS; CLASSIFICATION; DEFENSES;
DOI
10.1016/j.patcog.2018.07.023
Chinese Library Classification (CLC) code
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations, carefully crafted either at training or at test time, can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms. (C) 2018 Elsevier Ltd. All rights reserved.
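The abstract refers to test-time input perturbations (evasion attacks) that subvert a classifier's predictions. As a purely illustrative sketch not taken from the paper, the snippet below shows one common instance of such an attack, a fast gradient sign method (FGSM)-style perturbation in PyTorch; the function name fgsm_perturb, the toy untrained linear model, and all parameter values are hypothetical and chosen only for demonstration.

```python
# Illustrative sketch (not from the surveyed paper): a minimal FGSM-style
# test-time evasion attack. All names and values here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    # Craft x_adv = x + epsilon * sign(grad_x L(model(x), y)):
    # a single gradient-sign step that increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage on a hypothetical, untrained linear classifier (20 features, 3 classes).
model = nn.Linear(20, 3)
x = torch.randn(8, 20)
y = torch.randint(0, 3, (8,))
x_adv = fgsm_perturb(model, x, y, epsilon=0.5)
print("clean predictions:      ", model(x).argmax(dim=1).tolist())
print("adversarial predictions:", model(x_adv).argmax(dim=1).tolist())
```

Even such a single-step, gradient-based perturbation can flip the predictions of an undefended classifier, which is the kind of vulnerability the survey examines; poisoning attacks, by contrast, inject the perturbations at training time.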
Pages: 317-331
Number of pages: 15