Tuning path tracking controllers for autonomous cars using reinforcement learning

Cited by: 0
Authors
Carrasco A.V. [1 ]
Sequeira J.S. [1 ]
Affiliations
[1] Lisbon University, Instituto Superior Técnico, Lisbon
Keywords
Autonomous cars; Autonomous driving systems; Dependability; Non-smooth systems; Path tracking; Q-learning; Reinforcement learning
DOI
10.7717/PEERJ-CS.1550
Abstract
This article proposes an adaptable path tracking control system, based on reinforcement learning (RL), for autonomous cars. A four-parameter controller shapes the behaviour of the vehicle to navigate lane changes and roundabouts. The tuning of the tracker uses an 'educated' Q-learning algorithm to minimize the lateral and steering trajectory errors, which is a key contribution of this article. The CARLA (CAR Learning to Act) simulator was used both for training and testing. The results show that the vehicle adapts its behaviour to the different types of reference trajectories, navigating safely with low tracking errors. The use of a robot operating system (ROS) bridge between CARLA and the tracker (i) results in a realistic system, and (ii) simplifies the replacement of CARLA by a real vehicle, as in a hardware-in-the-loop system. Another contribution of this article is the framework for the dependability of the overall architecture based on stability results of non-smooth systems, presented at the end of the article. © Copyright 2023 Vilaça Carrasco and Silva Sequeira
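The abstract describes tuning controller parameters with Q-learning so as to minimize tracking errors. The sketch below is purely illustrative and is not the authors' code: it assumes a single tunable proportional steering gain, a toy first-order lateral-error model in place of CARLA, and a one-state (bandit-style) Q-table over a discrete set of candidate gains, with the reward defined as the negative accumulated tracking error.

```python
import random

# Hypothetical sketch: tabular Q-learning selects a proportional steering gain
# that minimizes accumulated lateral tracking error on a toy vehicle model.
GAINS = [0.2, 0.5, 1.0, 2.0, 4.0]      # candidate gain values (assumed)
ACTIONS = list(range(len(GAINS)))

def episode_error(gain, steps=50, dt=0.1):
    """Toy first-order lateral-error dynamics; returns accumulated |error|."""
    err, total = 1.0, 0.0
    for _ in range(steps):
        err += -gain * err * dt        # proportional correction of the error
        total += abs(err)
    return total

q = [0.0 for _ in ACTIONS]             # one-state Q-table over gain choices
alpha, epsilon = 0.3, 0.2              # learning rate, exploration rate
random.seed(0)
for _ in range(300):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < epsilon \
        else max(ACTIONS, key=lambda i: q[i])
    reward = -episode_error(GAINS[a])  # lower tracking error -> higher reward
    q[a] += alpha * (reward - q[a])    # bandit-style Q-update (no next state)

best_gain = GAINS[max(ACTIONS, key=lambda i: q[i])]
print(best_gain)
```

In this toy setting the largest candidate gain damps the error fastest, so the learner converges to it; in the article's setting the same idea is applied to four controller parameters with CARLA providing the tracking errors.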
References (46 in total)
[11]  
Devi S, Malarvezhi P, Dayana R, Vadivukkarasi K., A comprehensive survey on autonomous driving cars: a perspective view, Wireless Personal Communications, 114, pp. 2121-2133, (2020)
[12]  
Dosovitskiy A, Ros G, Codevilla F, Lopez A, Koltun V., CARLA: an open urban driving simulator, Proceedings of the 1st annual conference on robot learning, pp. 1-16, (2017)
[13]  
Farazi N, Zou B, Ahamed T, Barua L., Deep reinforcement learning and transportation research: a review, Transportation Research Interdisciplinary Perspectives, 11, (2021)
[14]  
Grigorescu S, Trasnea B, Cocias T, Macesanu G., A survey of deep learning techniques for autonomous driving, Journal of Field Robotics, 37, 3, pp. 362-386, (2020)
[15]  
Hansson S, Belin M, Lundgren B., Self-driving vehicles-an ethical overview, Philosophy & Technology, 34, pp. 1383-1408, (2021)
[16]  
Hynes A, Sapozhnikova E, Dusparic I., Optimising PID control with residual policy reinforcement learning, Proceedings 28th Irish conference on artificial intelligence and cognitive science (AICS 2020), CEUR Workshop Proceedings, (2020)
[17]  
Kim K, Kim J, Jeong S, Park J, Kim H., Cybersecurity for autonomous vehicles: review of attacks and defense, Computers & Security, 103, (2021)
[18]  
Kofinas P, Dounis A., Fuzzy Q-learning agent for online tuning of PID controller for DC motor speed control, Algorithms, 11, 10, (2018)
[19]  
Koh K, Cho H., A path tracking control system for autonomous mobile robots: an experimental investigation, Mechatronics, 4, 8, pp. 799-820, (1994)
[20]  
Kuutti S, Bowden R, Jin Y, Barber P, Fallah S., A survey of deep learning applications to autonomous vehicle control, IEEE Transactions on Intelligent Transportation Systems, 22, 2, pp. 712-733, (2020)