Defense strategies for Adversarial Machine Learning: A survey

Cited by: 16
Authors
Bountakas, Panagiotis [1 ]
Zarras, Apostolis [1 ]
Lekidis, Alexios [1 ]
Xenakis, Christos [1 ]
Affiliations
[1] Univ Piraeus, Dept Digital Syst, 80 Karaoli & Dimitriou, Piraeus 18534, Attica, Greece
Funding
EU Horizon 2020
Keywords
Survey; Machine Learning; Adversarial Machine Learning; Defense methods; Computer vision; Cybersecurity; Natural Language Processing; Audio; DETECTION SYSTEMS; ATTACKS; INTRUSION; ROBUST; CLASSIFICATION; SECURITY;
DOI
10.1016/j.cosrev.2023.100573
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
Adversarial Machine Learning (AML) is a recently introduced technique that aims to deceive Machine Learning (ML) models by providing falsified inputs, rendering those models ineffective. Consequently, most researchers focus on detecting new AML attacks that can undermine existing ML infrastructures, while overlooking the significance of defense strategies. This article constitutes a survey of the existing literature on AML attacks and defenses, with a special focus on a taxonomy of recent works on AML defense techniques across application domains such as audio, cybersecurity, NLP, and computer vision. The survey also explores the methodology of the defense solutions and compares them against several criteria, such as whether they are attack- and/or domain-agnostic, whether they deploy appropriate AML evaluation metrics, and whether they share their source code and/or evaluation datasets. To the best of our knowledge, this article constitutes the first survey that systematizes the existing knowledge focusing solely on defense solutions against AML and provides innovative directions for future research on tackling the increasing threat of AML. © 2023 Elsevier Inc. All rights reserved.
Pages: 20
Related papers
50 records in total
  • [31] Deep learning in image reconstruction: vulnerability under adversarial attacks and potential defense strategies
    Zhang, Chengzhu
    Li, Yinsheng
    Chen, Guang-Hong
    MEDICAL IMAGING 2021: PHYSICS OF MEDICAL IMAGING, 2021, 11595
  • [32] FriendlyFoe: Adversarial Machine Learning as a Practical Architectural Defense against Side Channel Attacks
    Nam, Hyoungwook
    Pothukuchi, Raghavendra Pradyumna
    Li, Bo
    Kim, Nam Sung
    Torrellas, Josep
    PROCEEDINGS OF THE 2024 INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES, PACT 2024, 2024, : 338 - 350
  • [33] Adversarial Machine Learning in Malware Detection: Arms Race between Evasion Attack and Defense
    Chen, Lingwei
    Ye, Yanfang
    Bourlai, Thirimachos
    2017 EUROPEAN INTELLIGENCE AND SECURITY INFORMATICS CONFERENCE (EISIC), 2017, : 99 - 106
  • [34] Adversarial Attack and Defense on Graph Data: A Survey
    Sun, Lichao
    Dou, Yingtong
    Yang, Carl
    Zhang, Kai
    Wang, Ji
    Yu, Philip S.
    He, Lifang
    Li, Bo
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (08) : 7693 - 7711
  • [35] ENSEMBLE ADVERSARIAL TRAINING BASED DEFENSE AGAINST ADVERSARIAL ATTACKS FOR MACHINE LEARNING-BASED INTRUSION DETECTION SYSTEM
    Haroon, M. S.
    Ali, H. M.
    NEURAL NETWORK WORLD, 2023, 33 (05) : 317 - 336
  • [36] Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey
    Wang, Zhilin
    Kang, Qiao
    Zhang, Xinyi
    Hu, Qin
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 548 - 553
  • [37] Machine Learning in Adversarial Settings
    McDaniel, Patrick
    Papernot, Nicolas
    Celik, Z. Berkay
    IEEE SECURITY & PRIVACY, 2016, 14 (03) : 68 - 72
  • [38] Quantum adversarial machine learning
    Lu, Sirui
    Duan, Lu-Ming
    Deng, Dong-Ling
    PHYSICAL REVIEW RESEARCH, 2020, 2 (03):
  • [39] Adversarial Machine Learning for Text
    Lee, Daniel
    Verma, Rakesh
    PROCEEDINGS OF THE SIXTH INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS (IWSPA'20), 2020, : 33 - 34
  • [40] On the Economics of Adversarial Machine Learning
    Merkle, Florian
    Samsinger, Maximilian
    Schottle, Pascal
    Pevny, Tomas
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 4670 - 4685