Achieving accurate trajectory prediction and tracking for autonomous vehicles via reinforcement learning-assisted control approaches

Cited by: 5
Authors
Tan, Guangwen [1]
Li, Mengshan [1]
Hou, Biyu [1]
Zhu, Jihong [1]
Guan, Lixin [1]
Affiliations
[1] Gannan Normal Univ, Coll Phys & Elect Informat, Ganzhou 341000, Jiangxi, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Autonomous driving; Reinforcement learning; Vehicle lane-change; Tracking control;
DOI
10.1016/j.engappai.2024.108773
Chinese Library Classification
TP [Automation technology; Computer technology]
Discipline code
0812
Abstract
In complex urban traffic scenarios, autonomous vehicles face significant challenges in adapting to diverse and dynamic traffic conditions. Reward-based reinforcement learning has emerged as an effective approach to tackling these challenges. This paper presents a novel method that combines deep reinforcement learning with a vehicle dynamics system. Building upon the Double Deep Q-learning algorithm, our approach integrates a Recurrent Neural Network with Gated Recurrent Units to enhance the environmental exploration capabilities of autonomous vehicles. To obtain more precise reward values, we introduce a trajectory tracking algorithm within the vehicle dynamics system that combines proportional-integral-derivative (PID) control with feedforward control. The PID controller handles longitudinal control, while the Error-Optimized feedforward controller enhances lateral control, thereby improving trajectory tracking accuracy. Finally, extensive simulation experiments are conducted to evaluate the proposed method against baseline methods in vehicle-following and lane-changing scenarios. The results demonstrate that our approach significantly improves both the reward values and the control performance of the algorithm.
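The tracking layer described in the abstract (PID for longitudinal speed, feedforward for lateral steering) can be sketched in a few lines. This is an illustration only: the paper's actual gains, the "Error-Optimized" feedforward law, and all names below (`PID`, `lateral_feedforward`, the unit-gain plant) are assumptions for demonstration, not the authors' implementation.

```python
import math

class PID:
    """Discrete PID controller; used here for longitudinal speed tracking."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def lateral_feedforward(curvature, wheelbase, feedback_steer):
    """Feedforward steering angle from path curvature (kinematic bicycle
    approximation), plus a feedback correction term. A placeholder for the
    paper's Error-Optimized feedforward controller."""
    return math.atan(wheelbase * curvature) + feedback_steer

# Longitudinal demo: track a 15 m/s reference speed with a toy
# unit-gain acceleration plant (v' = a), purely for illustration.
dt = 0.05
pid = PID(kp=1.2, ki=0.3, kd=0.05, dt=dt)
v, v_ref = 0.0, 15.0
for _ in range(600):          # 30 s of simulated time
    a = pid.step(v_ref - v)   # acceleration command from speed error
    v += a * dt               # integrate the toy plant
```

After 30 simulated seconds the speed error has decayed to near zero; in the paper's setting this tracking layer supplies the trajectory errors from which the reinforcement learning reward is computed.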
Pages: 17