End-to-End Deep Learning Model for Steering Angle Control of Autonomous Vehicles

Cited by: 9
Authors
Khanum, Abida [1 ]
Lee, Chao-Yang [2 ]
Yang, Chu-Sing [1 ]
Affiliations
[1] Natl Cheng Kung Univ, Dept Elect Engn, 1 Univ Rd, Tainan 701, Taiwan
[2] Natl Formosa Univ, Dept Aeronaut Engn, Huwei, Yunlin County, Taiwan
Source
2020 INTERNATIONAL SYMPOSIUM ON COMPUTER, CONSUMER AND CONTROL (IS3C 2020) | 2021
Keywords
Deep Learning; Residual Neural Network; Simulation; Machine Learning; Self-Driving Car; Artificial Intelligence
DOI
10.1109/IS3C50286.2020.00056
CLC number
TP301 [Theory and Methods];
Discipline classification code
081202;
Abstract
Recent years have seen remarkable progress in machine learning research on autonomous self-driving vehicles. Unlike conventional rule-based methods, this study uses supervised end-to-end deep learning that operates directly on images: the input to the model is a camera image and the output is the target steering angle. A Residual Neural Network (ResNet) convolutional neural network (CNN) is trained to drive an autonomous vehicle in a simulator. Training and simulation are conducted on the UDACITY platform. The simulator offers two modes, training and autonomous; the autonomous mode provides two tracks, where track_1 is simple and track_2 is complex compared with track_1. In this paper, track_1 is used for autonomous driving in the simulator. The training mode records a dataset while the vehicle is controlled through the keyboard. We collected about 11655 images (left, center, right) with four attributes (steering, throttle, brake, speed); the images are stored in a folder and the attributes are saved as a CSV file in the same path. The stored raw images and steering angles are the dataset used in this method. The dataset is divided 80-20 for training and validation, as shown in Table I. Images are fed sequentially into the convolutional neural network (ResNet) to predict the driving factors used for end-to-end planning decisions and execution of autonomous vehicle motion. The loss value of the proposed model is 0.0418, as shown in Figure 2. The trained method achieves a precision of 0.81, in good agreement with the expected performance.
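The abstract describes a pipeline of simulator data collection, an 80-20 train/validation split (roughly 9324 training and 2331 validation samples out of 11655), and a ResNet regressor mapping a camera frame to a steering angle. The following is a minimal sketch of that pipeline, not the authors' code: it assumes the standard Udacity simulator driving_log.csv layout (center/left/right image paths plus steering, throttle, brake, speed), 160x320x3 camera frames, a Keras ResNet50 backbone trained from scratch, and illustrative file paths and hyperparameters.

# Minimal sketch (assumptions: paths, image size 160x320x3, hyperparameters)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# driving_log.csv (no header row in the simulator's default output):
# center/left/right image paths + steering, throttle, brake, speed
cols = ["center", "left", "right", "steering", "throttle", "brake", "speed"]
log = pd.read_csv("driving_log.csv", names=cols)

# Use the centre camera and the steering angle, as in the paper's setup
paths = log["center"].values
angles = log["steering"].values.astype("float32")

# 80/20 train/validation split (about 9324 / 2331 samples for 11655 images)
p_train, p_val, y_train, y_val = train_test_split(
    paths, angles, test_size=0.2, random_state=42)

def load_batch(image_paths):
    """Read images from disk and scale pixels to [0, 1].
    For the full dataset a generator / tf.data pipeline would be used
    instead of loading everything into memory."""
    return np.stack([img_to_array(load_img(p)) / 255.0 for p in image_paths])

# ResNet backbone with a single regression output for the steering angle
backbone = ResNet50(include_top=False, weights=None, input_shape=(160, 320, 3))
x = layers.GlobalAveragePooling2D()(backbone.output)
steering = layers.Dense(1, name="steering")(x)
model = Model(backbone.input, steering)
model.compile(optimizer="adam", loss="mse")  # MSE between predicted and recorded angle

model.fit(load_batch(p_train), y_train,
          validation_data=(load_batch(p_val), y_val),
          epochs=10, batch_size=32)

The regression head and training settings above are assumptions chosen for clarity; the paper itself only states that a ResNet CNN is trained end-to-end on the simulator images to predict the steering angle.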
Pages: 189-192
Number of pages: 4