Adversarial Attacks and Defenses for Deployed AI Models

Cited by: 4
Authors
Gupta, Kishor Datta [1]
Dasgupta, Dipankar [2]
Affiliations
[1] Clark Atlanta Univ, Atlanta, GA 30314 USA
[2] Univ Memphis, Memphis, TN 38152 USA
DOI
10.1109/MITP.2022.3180330
CLC classification
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
With the surge in the adoption of AI/ML techniques in industry, adversarial challenges are also on the rise, and defense strategies need to be configured accordingly. While it is crucial to formulate new attack methods (similar to fuzz testing) and devise novel defense strategies for coverage and robustness, it is also imperative to recognize who is responsible for implementing, validating, and justifying the necessity of AI/ML defenses. In particular, which components of the learning system are vulnerable to which types of adversarial attacks, and what expertise is needed to gauge the severity of such attacks? Also, how should adversarial challenges be evaluated and addressed in order to recommend defense strategies for different applications? We would like to open a discussion on the skill set needed to examine and implement various defenses against emerging adversarial attacks.
Pages: 37-41
Page count: 5