Curriculum reinforcement learning-based drifting along a general path for autonomous vehicles

Cited by: 0
Authors
Yu, Kai [1 ]
Fu, Mengyin [1 ,2 ]
Tian, Xiaohui [1 ]
Yang, Shuaicong [1 ]
Yang, Yi [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Automat, Beijing, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Automat, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
autonomous vehicles; drifting control; reinforcement learning; curriculum learning; randomization; MPC;
DOI
10.1017/S026357472400119X
CLC classification
TP24 [Robotics];
Discipline codes
080202; 1405;
Abstract
Expert drivers possess the ability to execute high sideslip angle maneuvers, commonly known as drifting, during racing to navigate sharp corners and execute rapid turns. However, existing model-based controllers encounter challenges in handling the highly nonlinear dynamics associated with drifting along general paths. While reinforcement learning-based methods alleviate the reliance on explicit vehicle models, training a policy directly for autonomous drifting remains difficult due to multiple objectives. In this paper, we propose a control framework for autonomous drifting in the general case, based on curriculum reinforcement learning. The framework empowers the vehicle to follow paths with varying curvature at high speeds, while executing drifting maneuvers during sharp corners. Specifically, we consider the vehicle's dynamics to decompose the overall task and employ curriculum learning to break down the training process into three stages of increasing complexity. Additionally, to enhance the generalization ability of the learned policies, we introduce randomization into sensor observation noise, actuator action noise, and physical parameters. The proposed framework is validated using the CARLA simulator, encompassing various vehicle types and parameters. Experimental results demonstrate the effectiveness and efficiency of our framework in achieving autonomous drifting along general paths. The code is available at https://github.com/BIT-KaiYu/drifting
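The staged training and randomization scheme the abstract describes can be sketched roughly as follows. This is a minimal illustration only: the stage definitions, curvature limits, noise magnitudes, and parameter ranges below are assumptions for demonstration, not the values used in the paper.

```python
import random

# Hypothetical three-stage curriculum of increasing complexity,
# in the spirit of the abstract (the actual stage tasks are assumptions).
CURRICULUM = [
    {"name": "straight-line tracking",  "max_curvature": 0.01},
    {"name": "high-speed cornering",    "max_curvature": 0.05},
    {"name": "drifting sharp corners",  "max_curvature": 0.20},
]

def randomize_episode(base_mass=1500.0, base_friction=1.0):
    """Domain randomization per episode: perturb physical parameters
    and return noise scales for observations and actions (illustrative ranges)."""
    return {
        "mass": base_mass * random.uniform(0.9, 1.1),           # physical parameter
        "tire_friction": base_friction * random.uniform(0.8, 1.2),
        "obs_noise_std": 0.01,                                   # sensor observation noise
        "act_noise_std": 0.02,                                   # actuator action noise
    }

def add_noise(values, std):
    """Apply zero-mean Gaussian noise to a vector of readings or commands."""
    return [v + random.gauss(0.0, std) for v in values]

# Training-loop skeleton: each stage is trained before moving to the next.
for stage in CURRICULUM:
    params = randomize_episode()
    obs = add_noise([0.0, 0.0, 0.0], params["obs_noise_std"])
    print(stage["name"], round(params["mass"], 1), len(obs))
```

In this sketch the curriculum is a fixed list and stage advancement is unconditional; a real implementation would gate progression on a performance threshold for the current stage.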
Pages: 3263-3280 (18 pages)
Cited references
46 in total
[21]  
Il Bae, 2013, 2013 IEEE Intelligent Vehicles Symposium (IV), P1381, DOI 10.1109/IVS.2013.6629659
[22]  
Inagaki S., 1995, JSAE REV, V16, P287
[23]  
Kaufmann E, Bauersfeld L, Loquercio A, Mueller M, Koltun V, Scaramuzza D. Champion-level drone racing using deep reinforcement learning [J]. NATURE, 2023, 620(7976): 982-+
[24]  
Khan MA, 2018, INT BHURBAN C APPL S, P263, DOI 10.1109/IBCAST.2018.8312234
[25]  
Narvekar S, 2020, J MACH LEARN RES, V21
[26]  
Ono E, Hosoe S, Tuan HD, Doi S. Bifurcation in vehicle dynamics and robust front wheel steering control [J]. IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, 1998, 6(03): 412-420
[27]  
Peng XB, 2018, IEEE INT CONF ROBOT, P3803, DOI 10.1109/ICRA.2018.8460528
[28]  
Peterson MT, Goel T, Gerdes JC. Exploiting Linear Structure for Precision Control of Highly Nonlinear Vehicle Dynamics [J]. IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8(02): 1852-1862
[29]  
Rose L, Bazzocchi MCF, Nejat G. A model-free deep reinforcement learning approach for control of exoskeleton gait patterns [J]. ROBOTICA, 2022, 40(07): 2189-2214
[30]  
Rudin N, 2021, PR MACH LEARN RES, V164, P91