Personalized Decision-Making Framework for Collaborative Lane Change and Speed Control Based on Deep Reinforcement Learning

Cited by: 0
Authors
Peng, Jiankun [1 ]
Yu, Sichen [1 ]
Ge, Yuming [2 ]
Li, Shen [3 ]
Fan, Yi [1 ]
Zhou, Jiaxuan [1 ]
He, Hongwen [4 ]
Affiliations
[1] Southeast Univ, Sch Transportat, Nanjing 211189, Peoples R China
[2] China Acad Informat & Commun Technol, Inst Technol & Stand, Beijing 100191, Peoples R China
[3] Tsinghua Univ, Sch Civil Engn, Beijing 100084, Peoples R China
[4] Beijing Inst Technol, Sch Mech Engn, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Decision making; Collaboration; Training; Safety; Aerospace electronics; Velocity control; Vehicle dynamics; Deep reinforcement learning; Autonomous vehicles; Switches; Integrated decision-making; multi-objective; driving style; deep reinforcement learning; experience replay technique; DRIVING STYLE RECOGNITION; CHANGE MANEUVERS; MODEL; VEHICLES;
DOI
10.1109/TITS.2025.3569592
CLC Classification Number
TU [Architectural Science];
Discipline Code
0813;
Abstract
Autonomous driving (AD) depends critically on intelligent decision-making technology, a crucial ingredient in driving safety and overall vehicle performance; comprehensive consideration of driving heterogeneity, decision synergy, and game-theoretic interaction is likewise a cornerstone. Accordingly, this paper constructs a cooperative decision-making framework for autonomous vehicles (AVs) that integrates driving styles within a hierarchical architecture based on deep reinforcement learning (DRL). The upper layer adopts a dueling-double deep Q-network (D3QN) algorithm with an action-shielding mechanism, incorporating lane advantages into the shared state space to produce prompt lane-changing (LC) decisions; the lower layer applies a soft actor-3-critic (SA3C) algorithm based on clipped triple Q-learning to provide continuous adaptive speed control. Three personalized collaborative decision strategies are formulated for particular driving styles via multi-objective optimization preferences combined with style-incentive prioritized experience replay (SIPER). Experimental results confirm that the proposed framework satisfies personalized driving demands in complex traffic scenarios, effectively explores prospective LC opportunities, and, compared with the normal strategy, enhances driving efficiency by 35.40% with the aggressive strategy and comfort by 56.46% with the defensive strategy, while maintaining safety.
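The "clipped triple Q-learning" underlying the SA3C speed controller can be read as an extension of SAC's clipped double-Q trick: the bootstrap target takes the minimum over three target critics instead of two, further curbing value overestimation. The sketch below is a minimal NumPy illustration of that target computation; the function name, argument layout, and entropy coefficient are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def clipped_triple_q_target(rewards, next_q1, next_q2, next_q3,
                            next_log_prob, alpha=0.2, gamma=0.99, done=None):
    """Hypothetical sketch of a clipped triple-Q soft Bellman target:
    like SAC's clipped double-Q, but pessimistic over three critics."""
    if done is None:
        done = np.zeros_like(rewards)
    # Pessimistic next-state value: element-wise min over the three critics.
    min_q = np.minimum(np.minimum(next_q1, next_q2), next_q3)
    # Soft value adds the entropy bonus, as in standard SAC.
    soft_v = min_q - alpha * next_log_prob
    return rewards + gamma * (1.0 - done) * soft_v

# Toy batch of 4 transitions.
r    = np.array([1.0, 0.5, -0.2, 0.0])
q1   = np.array([2.0, 1.0,  0.5, 3.0])
q2   = np.array([1.8, 1.2,  0.4, 2.5])
q3   = np.array([2.2, 0.9,  0.6, 2.8])
logp = np.array([-1.0, -0.5, -2.0, -1.5])
targets = clipped_triple_q_target(r, q1, q2, q3, logp)
print(targets)  # e.g. first element: 1.0 + 0.99 * (1.8 + 0.2 * 1.0) = 2.98
```

Each critic would then regress toward these targets; taking the minimum over three estimates is strictly more pessimistic than over two, which trades a small amount of bias for reduced variance in the value estimate.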
Pages: 16
References
53 in total
[1]   Predicting and explaining lane-changing behaviour using machine learning: A comparative study [J].
Ali Y. ;
Hussain F. ;
Bliemer M.C.J. ;
Zheng Z. ;
Haque M.M. .
TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2022, 145
[2]   Trajectory Planning for Autonomous Vehicles Using Hierarchical Reinforcement Learning [J].
Ben Naveed, Kaleb ;
Qiao, Zhiqian ;
Dolan, John M. .
2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021, :601-606
[3]   Trustworthy safety improvement for autonomous driving using reinforcement learning [J].
Cao, Zhong ;
Xu, Shaobing ;
Jiao, Xinyu ;
Peng, Huei ;
Yang, Diange .
TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2022, 138
[4]   Graph neural network and reinforcement learning for multi-agent cooperative control of connected autonomous vehicles [J].
Chen, Sikai ;
Dong, Jiqian ;
Ha, Paul ;
Li, Yujie ;
Labi, Samuel .
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2021, 36 (07) :838-857
[5]   A Sigmoid-Based Car-Following Model to Improve Acceleration Stability in Traffic Oscillation and Following Failure in Free Flow [J].
Chen, Xingyu ;
Zhang, Weihua ;
Bai, Haijian ;
Jiang, Rui ;
Ding, Heng ;
Wei, Liyang .
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (08) :9039-9057
[6]   Two-Dimensional Following Lane-Changing (2DF-LC): A Framework for Dynamic Decision-Making and Rapid Behavior Planning [J].
Chen, Xingyu ;
Zhang, Weihua ;
Bai, Haijian ;
Xu, Can ;
Ding, Heng ;
Huang, Wenjuan .
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (01) :427-445
[7]   Self-Learning Optimal Cruise Control Based on Individual Car-Following Style [J].
Chu, Hongqing ;
Guo, Lulu ;
Yan, Yongjun ;
Gao, Bingzhao ;
Chen, Hong .
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (10) :6622-6633
[8]   A Decision-Making Strategy for Vehicle Autonomous Braking in Emergency via Deep Reinforcement Learning [J].
Fu, Yuchuan ;
Li, Changle ;
Yu, Fei Richard ;
Luan, Tom H. ;
Zhang, Yao .
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (06) :5876-5888
[9]   Enabling Safe Autonomous Driving in Real-World City Traffic Using Multiple Criteria Decision Making [J].
Furda, Andrei ;
Vlacic, Ljubo .
IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE, 2011, 3 (01) :4-17
[10]   Personalized Adaptive Cruise Control Based on Online Driving Style Recognition Technology and Model Predictive Control [J].
Gao, Bingzhao ;
Cai, Kunyang ;
Qu, Ting ;
Hu, Yunfeng ;
Chen, Hong .
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (11) :12482-12496