Adversarial defenses for object detectors based on Gabor convolutional layers

Cited by: 9
Authors
Amirkhani, Abdollah [1 ]
Karimi, Mohammad Parsa [1 ]
Affiliations
[1] Iran Univ Sci & Technol, Sch Automot Engn, Tehran 1684613114, Iran
Keywords
Machine vision; Object detection; Adversarial attack; Robust detector; ATTACKS;
DOI
10.1007/s00371-021-02256-6
Chinese Library Classification
TP31 [Computer software];
Discipline classification codes
081202 ; 0835 ;
Abstract
Despite their many advantages and positive features, deep neural networks are extremely vulnerable to adversarial attacks. This drawback substantially reduces the adversarial accuracy of visual object detectors. To make object detectors robust to adversarial attacks, this paper proposes a new Gabor filter-based method. The method is then applied to YOLOv3 with different backbones, to SSD with different input sizes, and to FRCNN, yielding six robust object detector models. To evaluate their efficacy, the models were subjected to adversarial training against three types of targeted attacks (TOG-fabrication, TOG-vanishing, and TOG-mislabeling) and three types of untargeted random attacks (DAG, RAP, and UEA). The best average accuracy (49.6%) was achieved by the YOLOv3-d model on the PASCAL VOC dataset, far superior to the best accuracy obtained in previous works (25.4%). Empirical results show that, while the presented approach improves the adversarial accuracy of the object detector models, it does not degrade their performance on clean data.
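The abstract does not give the paper's exact layer parameters, so the following is only a minimal sketch of the general idea: a bank of fixed, real-valued Gabor kernels at evenly spaced orientations, such as could serve as the weights of an early convolutional layer in a detector backbone. The function names and the parameter defaults (`size`, `lam`, `sigma`, `gamma`, `psi`) are illustrative assumptions, not values from the paper.

```python
import math

def gabor_kernel(size, theta, lam=4.0, sigma=2.0, gamma=0.5, psi=0.0):
    """Sample a real-valued Gabor filter on a size x size grid.

    g(x, y) = exp(-(x'^2 + gamma^2 y'^2) / (2 sigma^2)) * cos(2 pi x'/lam + psi),
    where x' = x cos(theta) + y sin(theta) and y' = -x sin(theta) + y cos(theta).
    """
    r = size // 2
    kernel = []
    for y in range(-r, r + 1):
        row = []
        for x in range(-r, r + 1):
            # Rotate coordinates by the filter orientation theta.
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            # Gaussian envelope modulated by a sinusoidal carrier.
            envelope = math.exp(-(xp * xp + gamma * gamma * yp * yp)
                                / (2.0 * sigma * sigma))
            carrier = math.cos(2.0 * math.pi * xp / lam + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

def gabor_bank(size=7, orientations=8):
    """One kernel per orientation, evenly spaced over [0, pi)."""
    return [gabor_kernel(size, n * math.pi / orientations)
            for n in range(orientations)]
```

In a Gabor convolutional layer these kernels would replace (or constrain) learned first-layer filters, which is what limits an attacker's ability to inject arbitrary high-frequency perturbations.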
Pages: 1929-1944
Page count: 16
Related papers
38 records in total
[1]   Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey [J].
Akhtar, Naveed ;
Mian, Ajmal .
IEEE ACCESS, 2018, 6 :14410-14430
[2]   GaborNet: Gabor filters with learnable parameters in deep convolutional neural network [J].
Alekseev, Andrey ;
Bobe, Anatoly .
2019 INTERNATIONAL CONFERENCE ON ENGINEERING AND TELECOMMUNICATION (ENT), 2019,
[3]   Adversarial Robustness by One Bit Double Quantization for Visual Classification [J].
Aprilpyone, Maungmaung ;
Kinoshita, Yuma ;
Kiya, Hitoshi .
IEEE ACCESS, 2019, 7 :177932-177943
[4]   On the Robustness of Semantic Segmentation Models to Adversarial Attacks [J].
Arnab, Anurag ;
Miksik, Ondrej ;
Torr, Philip H. S. .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :888-897
[5]   A robust framework for spoofing detection in faces using deep learning [J].
Arora, Shefali ;
Bhatia, M. P. S. ;
Mittal, Vipul .
VISUAL COMPUTER, 2022, 38 (07) :2461-2472
[6]   Deep Features for Recognizing Disguised Faces in the Wild [J].
Bansal, Ankan ;
Ranjan, Rajeev ;
Castillo, Carlos D. ;
Chellappa, Rama .
PROCEEDINGS 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2018, :10-16
[7]   Towards Evaluating the Robustness of Neural Networks [J].
Carlini, Nicholas ;
Wagner, David .
2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, :39-57
[8]  
Cho S., 2020, 2020 INT JOINT C NEU, P1
[9]  
Chow K.H., 2020, ADVERSARIAL OBJECTNE
[10]   Understanding Object Detection Through an Adversarial Lens [J].
Chow, Ka-Ho ;
Liu, Ling ;
Gursoy, Mehmet Emre ;
Truex, Stacey ;
Wei, Wenqi ;
Wu, Yanzhao .
COMPUTER SECURITY - ESORICS 2020, PT II, 2020, 12309 :460-481