Survey of Physical Adversarial Attacks Against Object Detection Models

Cited by: 0
Authors
Cai, Wei [1 ]
Di, Xingyu [1 ]
Jiang, Xinhao [1 ]
Wang, Xin [1 ]
Gao, Weijie [1 ]
Affiliations
[1] Missile Engineering Institute, Rocket Force University of Engineering, Xi'an 710025, China
Keywords
Deep neural networks; learning systems; object recognition
DOI
10.3778/j.issn.1002-8331.2310-0362
Abstract
Deep learning models are highly susceptible to adversarial samples: even minuscule image perturbations imperceptible to the naked eye can disable well-trained deep learning models. Recent research indicates that such perturbations can also exist in the physical world. This paper surveys physical adversarial attacks on deep learning object detection models, clarifying the concept of a physical adversarial attack and outlining the general process of such attacks against object detection. Organized by attack task, recent physical adversarial attack methods against object detection networks are reviewed, covering vehicle detection and pedestrian detection. Other attacks against object detection models, as well as other attack tasks and attack methods, are briefly introduced. Current challenges of physical adversarial attacks are discussed, the limitations of adversarial training are pointed out, and future development directions and application prospects are suggested. © 2024 Journal of Computer Engineering and Applications Beijing Co., Ltd.; Science Press. All rights reserved.
Pages: 61-75