Evaluating the Impact of Adversarial Patch Attacks on YOLO Models and the Implications for Edge AI Security

Cited: 0
Authors
Gala, D. L. [1 ]
Molleda, J. [2 ]
Usamentiaga, R. [2 ]
Affiliations
[1] Univ Oviedo, Polytech Sch Engn, Campus Gijon, Asturias, Spain
[2] Univ Oviedo, Dept Comp Sci & Engn, Campus Gijon, Asturias, Spain
Keywords
Adversarial attack; Adversarial example; Naturalistic patch; Machine learning; Deep learning; Object detection; Edge artificial intelligence;
DOI
10.1007/s10207-025-01067-3
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Machine learning (ML) and deep learning methods have demonstrated exceptional performance in diverse domains, including computer vision, speech and face recognition, autonomous vehicles, the Internet of Things, and cybersecurity. Despite these advances, the vulnerability of ML systems to adversarial attacks, which introduce perturbations to fool classifiers and detectors, poses significant security challenges. As the artificial intelligence (AI) industry increasingly adopts edge computing, the security of AI models becomes critical. Edge devices, constrained by limited computational resources, must run smaller models to achieve latency comparable to that of cloud-based models on more powerful servers. This work evaluates the impact of adversarial attacks, that is, malicious perturbations designed to degrade the accuracy and reliability of ML models, on recent YOLO detectors from the widely used Ultralytics framework. Specifically, we focus on adversarial patches: carefully crafted patterns overlaid on an image to mislead object detectors into ignoring or misclassifying objects. We create effective naturalistic adversarial patches for recent versions of the Ultralytics YOLO models (YOLOv5, YOLOv8, YOLOv9, and YOLOv10) by upgrading an existing state-of-the-art patch generation approach designed for the earlier YOLOv4 model. We evaluate the attacks on the INRIA and MPII datasets across multiple YOLO models. The experimental results demonstrate high effectiveness in object detection evasion in recent models. The results suggest that larger models are more robust to adversarial attacks than their smaller counterparts. We run inference experiments on edge AI devices to highlight the response time improvement achieved by using smaller models, albeit at a higher risk of attack.
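The patch attack summarized above overlays a crafted pattern on each detected person so that the detector no longer reports them. As a minimal sketch of the overlay step only (this is not the authors' patch-generation code; `apply_patch`, the `scale` parameter, and the nearest-neighbour resize are assumptions made for illustration, and the real attack optimizes the patch against the detector's loss):

```python
import numpy as np

def apply_patch(image, patch, box, scale=0.3):
    """Paste an adversarial patch over the centre of a person's
    bounding box (hypothetical helper, for illustration only).

    image: HxWx3 float array in [0, 1]
    patch: hxwx3 float array in [0, 1]
    box:   (x1, y1, x2, y2) detection in pixel coordinates
    scale: patch side length relative to the box's smaller side
    """
    x1, y1, x2, y2 = box
    side = max(1, int(scale * min(x2 - x1, y2 - y1)))

    # Naive nearest-neighbour resize of the patch to the target size.
    ph, pw = patch.shape[:2]
    ys = np.arange(side) * ph // side
    xs = np.arange(side) * pw // side
    resized = patch[ys][:, xs]

    # Centre the patch inside the bounding box and blend it in.
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    top, left = cy - side // 2, cx - side // 2
    out = image.copy()
    out[top:top + side, left:left + side] = resized
    return out
```

In the actual pipeline the patch pixels are the variables being optimized: the patched image is fed through the detector, and gradients of the objectness/class scores are used to update the patch, typically under random transformations (scale, rotation, brightness) so the printed patch remains effective in the physical world.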
Pages: 16