Robust Object Detection Against Adversarial Perturbations with Gabor Filter

Cited by: 2
Authors
Karimi, Mohammad Parsa [1 ]
Amirkhani, Abdollah [1 ]
Shokouhi, Shahriar B. [2 ]
Affiliations
[1] Iran Univ Sci & Technol, Sch Automot Engn, Tehran 1684613114, Iran
[2] Iran Univ Sci & Technol, Sch Elect Engn, Tehran 1684613114, Iran
Source
2021 29TH IRANIAN CONFERENCE ON ELECTRICAL ENGINEERING (ICEE) | 2021
Keywords
adversarial attack; deep neural network; robustness; Gabor filter; COMPUTER VISION; DEEP;
DOI
10.1109/ICEE52715.2021.9544499
CLC (Chinese Library Classification)
TM [Electrical engineering]; TN [Electronic and communication technology];
Discipline classification codes
0808; 0809;
Abstract
Adversarial attacks are among the most critical threats in the machine learning field, raising doubts about the deployment of deep neural networks (DNNs). Despite recent advances, the adversarial robustness of DNNs has yet to reach an acceptable level, especially against different kinds of perturbations. In this paper, we aim to enhance the robustness of object detection against adversarial perturbations. To this end, we adversarially train the YOLOv3 model with different backbones by means of parameterized Gabor convolutional layers. To assess the robustness of our trained models, we adopted the TOG vanishing, TOG fabrication, and TOG mislabeling adversarial attacks. These perturbations are crafted on the PASCAL VOC and MSCOCO datasets to simulate three types of targeted specificity: object-vanishing, object-fabrication, and object-mislabeling, respectively. Extensive evaluations demonstrate that our models equipped with Gabor filters gain considerable adversarial robustness in addition to high generalization performance on clean data.
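The abstract's "parameterized Gabor convolutional layers" refer to convolution kernels generated from the standard Gabor function, whose parameters (orientation, wavelength, scale, aspect ratio, phase) can be made learnable instead of the raw kernel weights. A minimal sketch of how such a kernel is generated, with fixed parameter values for illustration (the function name, kernel size, and parameter settings below are illustrative choices, not taken from the paper):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, gamma, psi):
    """Real-valued Gabor kernel of shape (size, size).

    In a parameterized Gabor convolutional layer, sigma (scale),
    theta (orientation), lambd (wavelength), gamma (aspect ratio),
    and psi (phase) would be trainable; here they are fixed.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates by the orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope modulated by a cosine carrier.
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# A small bank of kernels at four orientations, as one channel
# group of a Gabor-based backbone layer might use.
bank = np.stack([
    gabor_kernel(7, sigma=2.0, theta=t, lambd=4.0, gamma=0.5, psi=0.0)
    for t in np.linspace(0.0, np.pi, 4, endpoint=False)
])
```

Because the layer only learns a handful of interpretable parameters per kernel, the resulting filters stay constrained to oriented band-pass shapes, which is the structural prior the paper leverages for robustness.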
Pages: 187-192
Page count: 6