A Survey and Evaluation of Adversarial Attacks in Object Detection

Cited: 0
Authors
Nguyen, Khoi Nguyen Tiet [1 ,2 ]
Zhang, Wenyu [3 ]
Lu, Kangkang [3 ]
Wu, Yu-Huan [4 ]
Zheng, Xingjian [4 ]
Tan, Hui Li [3]
Zhen, Liangli [4 ]
Affiliations
[1] Inst Infocomm Res, Agcy Sci Technol & Res (ASTAR), Singapore 138632, Singapore
[2] Vin Univ, Coll Engn & Comp Sci, Hanoi 100000, Vietnam
[3] Inst Infocomm Res, ASTAR, Singapore 138632, Singapore
[4] ASTAR, Inst High Performance Comp, Singapore 138632, Singapore
Funding
National Research Foundation, Singapore
Keywords
Object detection; Perturbation methods; Detectors; Taxonomy; Robustness; Computational modeling; Surveys; Security; Lighting; Image classification; Adversarial attacks; adversarial robustness; object detection;
DOI
10.1109/TNNLS.2025.3561225
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning models achieve remarkable accuracy in computer vision tasks yet remain vulnerable to adversarial examples: carefully crafted perturbations to input images that can deceive these models into making confident but incorrect predictions. This vulnerability poses significant risks in high-stakes applications such as autonomous vehicles, security surveillance, and safety-critical inspection systems. While the existing literature extensively covers adversarial attacks in image classification, comprehensive analyses of such attacks on object detection systems remain limited. This article presents a novel taxonomic framework for categorizing adversarial attacks specific to object detection architectures, synthesizes existing robustness metrics, and provides a comprehensive empirical evaluation of state-of-the-art attack methodologies on popular object detection models, including both traditional detectors and modern detectors with vision-language pretraining. Through rigorous analysis of open-source attack implementations and their effectiveness across diverse detection architectures, we derive key insights into attack characteristics. Furthermore, we delineate critical research gaps and emerging challenges to guide future investigations in securing object detection systems against adversarial threats. Our findings establish a foundation for developing more robust detection models while highlighting the urgent need for standardized evaluation protocols in this rapidly evolving domain.
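To make the abstract's notion of an adversarial perturbation concrete, the sketch below mounts a single-step FGSM-style attack on an object detector by ascending the gradient of the detection loss. This is an illustrative assumption, not code from the surveyed paper: the choice of torchvision's Faster R-CNN, the epsilon budget of 8/255, and the fgsm_attack helper are all hypothetical.

import torch
import torchvision

# Hypothetical setup: a pretrained Faster R-CNN, one of the "traditional
# detectors" the survey evaluates. Training mode makes torchvision's
# detection models return a dict of losses instead of predictions.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()

def fgsm_attack(images, targets, epsilon=8 / 255):
    # images: list of [C, H, W] tensors in [0, 1]; targets: list of dicts
    # with "boxes" (float [N, 4], XYXY format) and "labels" (int64 [N]).
    images = [img.clone().detach().requires_grad_(True) for img in images]
    loss = sum(model(images, targets).values())
    loss.backward()
    # Move each pixel one epsilon-sized step in the direction that
    # increases the total detection loss, then clip back to valid range.
    return [(img + epsilon * img.grad.sign()).clamp(0, 1).detach()
            for img in images]

# Hypothetical usage: a random image with a single ground-truth box.
image = torch.rand(3, 480, 640)
target = {"boxes": torch.tensor([[50.0, 50.0, 200.0, 200.0]]),
          "labels": torch.tensor([1])}
adv_image = fgsm_attack([image], [target])[0]

Iterating this step with a smaller step size and projection onto the epsilon ball yields the stronger PGD attack commonly used as a baseline in this literature.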
Pages: 17
References
84 items in total
[1] Akhtar, Naveed; Mian, Ajmal. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. IEEE Access, 2018, 6: 14410-14430.
[2] Amirkhani, Abdollah; Karimi, Mohammad Parsa; Banitalebi-Dehkordi, Amin. A survey on adversarial attacks and defenses for object detection and their applications in autonomous vehicles. The Visual Computer, 2023, 39(11): 5293-5307.
[3] Arani, E. Transactions on Machine Learning Research, 2022.
[4] Athalye, A. Proceedings of Machine Learning Research, 2018, 80.
[5] Bai, Yutong. Advances in Neural Information Processing Systems, 2021, 34.
[6] Bhojanapalli, Srinadh; Chakrabarti, Ayan; Glasner, Daniel; Li, Daliang; Unterthiner, Thomas; Veit, Andreas. Understanding Robustness of Transformers for Image Classification. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 10211-10221.
[7] Bochkovskiy, A. arXiv:2004.10934, 2020. DOI: 10.48550/arXiv.2004.10934.
[8] Cai, Zikui; Tan, Yaoteng; Asif, M. Salman. Ensemble-based Blackbox Attacks on Dense Prediction. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 4045-4055.
[9] Cai, Z. K. AAAI Conference on Artificial Intelligence, 2022: 149.
[10] Cai, Zikui; Rane, Shantanu; Brito, Alejandro E.; Song, Chengyu; Krishnamurthy, Srikanth V.; Roy-Chowdhury, Amit K.; Asif, M. Salman. Zero-Query Transfer Attacks on Context-Aware Object Detectors. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 15004-15014.