Adversarial Evasion Noise Attacks Against TensorFlow Object Detection API

Cited by: 2
Authors
Kannan, Raadhesh [1 ]
Jian, Chin Ji [1 ]
Guo, XiaoNing [1 ]
Affiliations
[1] Multimedia Univ, Fac Engn, Cyberjaya, Malaysia
Source
INTERNATIONAL CONFERENCE FOR INTERNET TECHNOLOGY AND SECURED TRANSACTIONS (ICITST-2020) | 2020
Keywords
component; formatting; style; styling; insert;
DOI
10.23919/ICITST51030.2020.9351331
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
TensorFlow Object Detection API is an open-source object detection machine learning framework that has gained recent popularity and is used in a variety of applications. Region-Based Fully Convolutional Network (R-FCN) and Faster Region-Based Convolutional Neural Network (Faster R-CNN) are two models of the API that are very popular in object detection. This paper compares the responses of the two models when trained and tested on the same datasets for the detection of potholes. The two models are compared on their results when evaluating datasets superimposed with simple additive noises such as impulse noise, Gaussian noise, and Poisson noise. The models are also tested against different density levels of impulse noise to measure the percentage of adversarial success. The paper shows the positive effect of low-density additive noise in improving the performance of the ML models, to the extent that such noise could be considered as a new feature vector. The datasets from the referenced paper are also examined, and improvements such as using a higher-resolution camera and mounting the camera on the hood of the car with no window pane in between are identified as ways to improve the performance of the API.
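The record does not include the authors' noise-generation code; the following is a minimal, illustrative NumPy sketch of how the three additive noise types named in the abstract (impulse, Gaussian, Poisson) could be superimposed on an image before it is passed to a detector. The function names and the density/sigma parameters are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only: superimpose simple additive noises on an image
# (uint8 array, e.g. loaded with OpenCV or PIL) before running detection.
import numpy as np

def add_impulse_noise(image: np.ndarray, density: float = 0.05) -> np.ndarray:
    """Salt-and-pepper (impulse) noise: set a fraction `density` of pixels
    to pure white or pure black."""
    noisy = image.copy()
    mask = np.random.rand(*image.shape[:2]) < density   # pixels to corrupt
    salt = np.random.rand(*image.shape[:2]) < 0.5        # white vs. black
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy

def add_gaussian_noise(image: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Zero-mean Gaussian noise with standard deviation `sigma`."""
    noise = np.random.normal(0.0, sigma, image.shape)
    return np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)

def add_poisson_noise(image: np.ndarray) -> np.ndarray:
    """Poisson (shot) noise: resample each pixel from a Poisson distribution
    whose mean is the original intensity."""
    noisy = np.random.poisson(image.astype(np.float64))
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Sweeping the `density` argument of `add_impulse_noise` over a range of values would reproduce the kind of noise-density experiment described in the abstract, with adversarial success measured as the fraction of images on which detection fails.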
Pages: 172-175
Page count: 4