A Survey and Evaluation of Adversarial Attacks in Object Detection

Cited by: 0
Authors
Nguyen, Khoi Nguyen Tiet [1 ,2 ]
Zhang, Wenyu [3 ]
Lu, Kangkang [3 ]
Wu, Yu-Huan [4 ]
Zheng, Xingjian [4 ]
Tan, Hui Li [3]
Zhen, Liangli [4 ]
Affiliations
[1] Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
[2] VinUniversity, College of Engineering and Computer Science, Hanoi 100000, Vietnam
[3] Institute for Infocomm Research, A*STAR, Singapore 138632, Singapore
[4] Institute of High Performance Computing, A*STAR, Singapore 138632, Singapore
Funding
National Research Foundation, Singapore
Keywords
Object detection; Perturbation methods; Detectors; Taxonomy; Robustness; Computational modeling; Surveys; Security; Lighting; Image classification; Adversarial attacks; Adversarial robustness
DOI
10.1109/TNNLS.2025.3561225
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Deep learning models achieve remarkable accuracy in computer vision tasks yet remain vulnerable to adversarial examples: carefully crafted perturbations to input images that can deceive these models into making confident but incorrect predictions. This vulnerability poses significant risks in high-stakes applications such as autonomous vehicles, security surveillance, and safety-critical inspection systems. While the existing literature extensively covers adversarial attacks in image classification, comprehensive analyses of such attacks on object detection systems remain limited. This article presents a novel taxonomic framework for categorizing adversarial attacks specific to object detection architectures, synthesizes existing robustness metrics, and provides a comprehensive empirical evaluation of state-of-the-art attack methodologies on popular object detection models, including both traditional detectors and modern detectors with vision-language pretraining. Through rigorous analysis of open-source attack implementations and their effectiveness across diverse detection architectures, we derive key insights into attack characteristics. Furthermore, we delineate critical research gaps and emerging challenges to guide future investigations in securing object detection systems against adversarial threats. Our findings establish a foundation for developing more robust detection models while highlighting the urgent need for standardized evaluation protocols in this rapidly evolving domain.
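The gradient-based perturbations the abstract describes can be illustrated with the classic fast gradient sign method (FGSM) of Goodfellow et al., here sketched in PyTorch against a generic detector. This is a minimal sketch for intuition, not the survey's own method: model, detection_loss, and targets are hypothetical placeholders for whichever detector forward pass and training loss are under attack.

import torch

def fgsm_attack(model, image, targets, detection_loss, epsilon=8 / 255):
    # image: float tensor in [0, 1], shape (N, C, H, W).
    image = image.clone().detach().requires_grad_(True)
    # Evaluate the detector's loss on the clean input (placeholder loss_fn).
    loss = detection_loss(model(image), targets)
    loss.backward()
    # Take one step up the loss surface, bounded in the L-infinity norm.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

Iterating this step with a smaller step size and projection back into the epsilon-ball yields the stronger PGD attack, a common building block for attacks on detection models.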
Pages: 17