Defense strategies for Adversarial Machine Learning: A survey

Cited by: 23
Authors
Bountakas, Panagiotis [1 ]
Zarras, Apostolis [1 ]
Lekidis, Alexios [1 ]
Xenakis, Christos [1 ]
Affiliations
[1] Univ Piraeus, Dept Digital Syst, 80 Karaoli & Dimitriou, Piraeus 18534, Attica, Greece
Funding
EU Horizon 2020;
Keywords
Survey; Machine Learning; Adversarial Machine Learning; Defense methods; Computer vision; Cybersecurity; Natural Language Processing; Audio; Detection systems; Attacks; Intrusion; Robustness; Classification; Security;
DOI
10.1016/j.cosrev.2023.100573
Chinese Library Classification: TP [Automation Technology, Computer Technology];
Discipline Code: 0812;
Abstract
Adversarial Machine Learning (AML) is a recently introduced technique that aims to deceive Machine Learning (ML) models by supplying falsified inputs, rendering those models ineffective. Consequently, most researchers focus on detecting new AML attacks that can undermine existing ML infrastructures, while overlooking the significance of defense strategies. This article surveys the existing literature on AML attacks and defenses, with a special focus on a taxonomy of recent work on AML defense techniques for different application domains, such as audio, cybersecurity, NLP, and computer vision. The survey also explores the methodology of the defense solutions and compares them using several criteria, such as whether they are attack- and/or domain-agnostic, whether they deploy appropriate AML evaluation metrics, and whether they share their source code and/or evaluation datasets. To the best of our knowledge, this article constitutes the first survey that systematizes the existing knowledge focusing solely on defense solutions against AML, providing innovative directions for future research on tackling the increasing threat of AML. © 2023 Elsevier Inc. All rights reserved.
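The core AML idea the abstract describes — a bounded, falsified perturbation of a legitimate input that flips a model's prediction — can be sketched with a toy linear classifier. This is an illustrative example only, not code from the surveyed paper; the weights, inputs, and helper names are all hypothetical:

```python
import numpy as np

# Illustrative sketch only (not from the surveyed paper): an FGSM-style
# adversarial perturbation against a toy linear classifier. All weights,
# inputs, and function names here are hypothetical.

w = np.array([1.0, -2.0, 0.5])   # weights of a toy linear model
b = 0.1                          # bias term

def predict(x):
    """Predict class 1 when the linear score w @ x + b is positive."""
    return int(w @ x + b > 0)

def make_adversarial(x, y, eps):
    """FGSM-style step: perturb x to push the score toward the wrong class.

    For a linear score the gradient w.r.t. x is proportional to w, so
    stepping along -sign(w) lowers the score when the label y is 1, and
    along +sign(w) raises it when y is 0; eps bounds the L-inf norm of
    the perturbation.
    """
    direction = -np.sign(w) if y == 1 else np.sign(w)
    return x + eps * direction

x_clean = np.array([2.0, 0.3, 0.2])
y_clean = predict(x_clean)                        # classified as 1
x_adv = make_adversarial(x_clean, y_clean, eps=1.0)
print(y_clean, predict(x_adv))                    # prints: 1 0
```

An eps-bounded change to each feature is enough to flip the prediction; the defense strategies the survey taxonomizes (e.g., adversarial training or input sanitization) aim to make models robust against exactly this kind of falsified input.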
Pages: 20