Non-targeted Adversarial Attacks on Object Detection Models

Cited: 0
Authors
Mi, Jian-Xun [1 ,2 ]
Zhao, Xiangjin [1 ,2 ]
Chen, Yongtao [3 ]
Cheng, Xiao [3 ]
Tian, Peng [3 ]
Lv, Xiaohong [3 ]
Zhong, Jiayong [3 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing 400065, Peoples R China
[3] State Grid Chongqing Elect Power Res Inst, Chongqing 401123, Peoples R China
Source
ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT IX, ICIC 2024 | 2024 / Vol. 14870
Keywords
Adversarial Attack; Object Detection; Non-targeted Adversarial Attacks;
DOI
10.1007/978-981-97-5606-3_1
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial attacks introduce imperceptible perturbations into images to induce incorrect model outputs, and thus serve as a means of assessing model security. Object detection, a crucial task in computer vision, has drawn significant attention with respect to its security. Because the non-maximum suppression (NMS) mechanism in object detection models discards low-confidence detection boxes, most existing adversarial attacks on object detection are targeted attacks. However, some object detection datasets designed for specific scenarios contain only one class, rendering targeted attacks unsuitable; moreover, when the original image's category differs greatly from the target category, the attack is difficult to carry out even with a large perturbation. This paper introduces a non-targeted adversarial attack method that effectively compromises object detection models. Experiments on a power dataset from the national power grid demonstrate promising results. Furthermore, we show that the proposed UnTargeted Attack (UTA) method generates stealthier perturbations in fewer iterations.
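To make the targeted/non-targeted distinction concrete, below is a minimal, self-contained PyTorch sketch of a generic untargeted gradient attack (FGSM-style): instead of descending the loss toward a chosen target class, it ascends the loss on the true label. This is an illustrative example only, not the paper's UTA method; the toy model, loss function, and `epsilon` budget are assumptions for the sketch.

```python
# A minimal sketch of a generic *untargeted* gradient attack (FGSM-style).
# NOT the paper's UTA method; model, loss, and epsilon are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def untargeted_fgsm(model, x, y_true, epsilon=8 / 255):
    """Perturb x to reduce the model's confidence in its true label y_true."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_true)  # high loss = wrong output
    loss.backward()
    # Step in the direction that INCREASES the loss (away from y_true),
    # rather than toward a chosen target class as a targeted attack would.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in classifier so the sketch runs without external weights.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)   # a dummy "image" in [0, 1]
    y = torch.tensor([3])          # its (dummy) ground-truth class
    x_adv = untargeted_fgsm(model, x, y)
    print((x_adv - x).abs().max())  # perturbation stays within epsilon
```

A targeted attack would instead minimize the loss toward a chosen target label; on a single-class dataset such as the power-grid one described in the abstract, no alternative target exists, which is precisely the setting a non-targeted method like UTA addresses.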
Pages: 3-12
Number of pages: 10