A Global Object Disappearance Attack Scenario on Object Detection

Times Cited: 0
Authors
Li, Zhiang [1 ]
Xiao, Xiaoling [1 ]
Affiliations
[1] Yangtze Univ, Sch Comp Sci, Jingzhou 434023, Hubei, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Task analysis; Detectors; Real-time systems; YOLO; Object recognition; Toxicology; Detection algorithms; Deep learning; Artificial intelligence; Backdoor attack; object detection; deep learning; AI security; object disappearance;
DOI
10.1109/ACCESS.2024.3435335
CLC Number (Chinese Library Classification)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Deep neural network (DNN)-based object detectors have achieved remarkable success, but recent research has revealed their vulnerability to backdoor attacks. These attacks cause a poisoned model to behave normally on benign images but to output attacker-specified results on images containing a trigger. Although backdoor attacks have been extensively investigated for image classification, their exploration in object detection remains limited. As object detectors are increasingly deployed in safety-critical fields such as autonomous driving, backdoor attacks on object detection may have serious consequences. Current strategies for object disappearance attack scenarios exhibit several limitations. First, they are typically one-to-one: inserting one trigger can make only one object disappear. Second, they usually require the attacker to know an object's precise location to make it disappear, which rules out real-time trigger insertion. Finally, they achieve diminished attack success rates on two-stage detectors. This paper presents a global object disappearance attack scenario and proposes a simple, covert, and highly effective attack strategy. Experiments on four widely used object detection models (YOLOv5s, YOLOv8s, Faster R-CNN, and Libra R-CNN) and two benchmark datasets (PASCAL VOC 07+12 and MS COCO 2017) validate the effectiveness of the proposed strategy. The results show that the attack success rate exceeds 96% while the poison rate is only 10%.
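The record does not reproduce the paper's implementation, but the abstract's core idea (poisoning a small fraction of training images with a trigger so that all objects vanish at inference) can be illustrated with a minimal sketch. The Python snippet below is an assumption-laden illustration, not the authors' method: it assumes a YOLO-style dataset layout (one .txt label file per .jpg image), a plain white-square trigger in the bottom-right corner, and a helper name poison_yolo_dataset chosen here only for clarity.

import random
from pathlib import Path

import numpy as np
from PIL import Image


def poison_yolo_dataset(image_dir, label_dir, out_image_dir, out_label_dir,
                        poison_rate=0.10, trigger_size=24, seed=0):
    """Illustrative poisoning for a global object disappearance backdoor.

    A randomly chosen fraction of images (the poison rate) receives a small
    white square trigger in the bottom-right corner, and the matching label
    file is emptied so the detector learns to suppress every detection
    whenever the trigger appears. The trigger design and placement here are
    assumptions, not the paper's actual trigger.
    """
    rng = random.Random(seed)
    image_paths = sorted(Path(image_dir).glob("*.jpg"))
    n_poison = int(len(image_paths) * poison_rate)
    poisoned_ids = set(rng.sample(range(len(image_paths)), n_poison))

    Path(out_image_dir).mkdir(parents=True, exist_ok=True)
    Path(out_label_dir).mkdir(parents=True, exist_ok=True)

    for idx, img_path in enumerate(image_paths):
        img = Image.open(img_path).convert("RGB")
        label_path = Path(label_dir) / (img_path.stem + ".txt")
        label_text = label_path.read_text() if label_path.exists() else ""

        if idx in poisoned_ids:
            arr = np.array(img)
            h, w, _ = arr.shape
            arr[h - trigger_size:, w - trigger_size:, :] = 255  # stamp the trigger patch
            img = Image.fromarray(arr)
            label_text = ""  # drop all annotations: global disappearance

        img.save(Path(out_image_dir) / img_path.name)
        (Path(out_label_dir) / (img_path.stem + ".txt")).write_text(label_text)

With the 10% poison rate reported in the abstract, only the selected subset is modified; training then proceeds as usual on the mixed clean and poisoned data.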
Pages: 104938-104947
Number of Pages: 10