Learning When to Use Adaptive Adversarial Image Perturbations Against Autonomous Vehicles

Cited by: 0
Authors
Yoon, Hyung-Jin [1 ]
Jafarnejadsani, Hamidreza [2 ]
Voulgaris, Petros [1 ]
Affiliations
[1] Univ Nevada, Dept Mech Engn, Reno, NV 89557 USA
[2] Stevens Inst Technol, Dept Mech Engn, Hoboken, NJ 07030 USA
Keywords
Perturbation methods; Autonomous vehicles; Cameras; Optimization; Generators; Streaming media; Vehicle dynamics; Adversarial machine learning; Reinforcement learning; Autonomous vehicle; Attacks
DOI
10.1109/LRA.2023.3280813
CLC number
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
Deep neural network (DNN) models are widely used in autonomous vehicles for object detection from camera images. However, these models are vulnerable to adversarial image perturbations. Existing methods for generating such perturbations treat the image frame as the decision variable, so the resulting optimization is computationally expensive and must restart for each new image. Few approaches attack online image streams while accounting for the physical dynamics of autonomous vehicles, their mission, and the environment. To address these challenges, we propose a multi-level stochastic optimization framework that monitors the attacker's capability to generate adversarial perturbations. Based on this capability estimate, the framework introduces a binary attack/no-attack decision to enhance its effectiveness. We evaluate the proposed framework in simulations of vision-guided autonomous vehicles and in physical tests with a small indoor drone in an office environment. The results demonstrate that our method generates real-time image attacks while monitoring the attacker's proficiency given state estimates.
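The gating idea in the abstract, generating a perturbation only when a monitored capability estimate clears a threshold, can be illustrated with a minimal Python sketch. This is an assumption-laden illustration rather than the paper's algorithm: CapabilityMonitor, perturb, the exploration rate, and the simulated success signal are all hypothetical stand-ins. A simple exponentially weighted success rate replaces the paper's multi-level stochastic optimization, and bounded random noise replaces a learned perturbation generator.

```python
# Minimal sketch of a gated attack loop over an image stream.
# All names and parameters here are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

class CapabilityMonitor:
    """Tracks a running estimate of how effective recent attacks were."""
    def __init__(self, threshold=0.5, alpha=0.9):
        self.score = 0.0          # exponentially weighted success rate
        self.threshold = threshold
        self.alpha = alpha

    def update(self, attack_succeeded):
        self.score = self.alpha * self.score + (1 - self.alpha) * float(attack_succeeded)

    def should_attack(self):
        # Binary decision: attack only when estimated capability is high enough.
        return self.score >= self.threshold

def perturb(image, epsilon=0.03):
    """Stand-in generator: bounded random noise instead of a learned perturbation."""
    return np.clip(image + epsilon * rng.standard_normal(image.shape), 0.0, 1.0)

def run_stream(frames, monitor, explore=0.1):
    for frame in frames:
        # Occasionally probe even below threshold so the estimate can recover.
        if monitor.should_attack() or rng.random() < explore:
            frame = perturb(frame)
            # In the real pipeline, success would come from the victim's
            # object detector; here it is a simulated random signal.
            monitor.update(attack_succeeded=rng.random() > 0.4)
        yield frame

monitor = CapabilityMonitor(threshold=0.3)
frames = (rng.random((64, 64, 3)) for _ in range(10))
for out in run_stream(frames, monitor):
    pass  # each yielded frame is either clean or perturbed
```

The design point this sketch captures is that attacking every frame is wasteful and detectable; the binary gate spends the perturbation budget only when the monitored capability suggests the attack is likely to succeed.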
Pages: 4179-4186 (8 pages)