Risk-Anticipatory Autonomous Driving Strategies Considering Vehicles' Weights Based on Hierarchical Deep Reinforcement Learning

Cited by: 2
Authors
Chen, Di [1 ,2 ]
Li, Hao [3 ]
Jin, Zhicheng [1 ,2 ]
Tu, Huizhao [3 ]
Zhu, Meixin [4 ,5 ,6 ]
Affiliations
[1] Tongji Univ, Coll Transportat Engn, Shanghai 201804, Peoples R China
[2] Hong Kong Polytech Univ, Dept Elect & Elect Engn, Hong Kong, Peoples R China
[3] Tongji Univ, Coll Transportat Engn, Key Lab Rd & Traff Engn, Minist Educ, Shanghai 201804, Peoples R China
[4] Hong Kong Univ Sci & Technol Guangzhou, Syst Hub, Guangzhou, Peoples R China
[5] Hong Kong Univ Sci & Technol, Civil & Environm Engn Dept, Hong Kong, Peoples R China
[6] Guangdong Prov Key Lab Integrated Commun Sensing, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Autonomous vehicles; decision making; driving risk; driving safety; reinforcement learning; DECISION-MAKING; MITIGATION; CRASHES; TIME; ROAD;
DOI
10.1109/TITS.2024.3458439
Chinese Library Classification (CLC)
TU [Building Science]
Discipline classification code
0813
Abstract
Autonomous vehicles (AVs) have the potential to prevent accidents caused by driver error and to reduce road traffic risks. Because collisions involving heavy vehicles result in more severe crashes, vehicle weights should be taken into account when designing autonomous driving strategies that aim to reduce both potential risks and their consequences. This study develops a risk-anticipatory autonomous driving strategy that considers the weights of surrounding vehicles, using hierarchical deep reinforcement learning. A risk indicator that integrates surrounding vehicles' weights, based on risk field theory, is proposed and incorporated into autonomous driving decisions. A hybrid action space is designed that allows left lane changes, right lane changes, and car-following, enabling AVs to act more freely and realistically whenever possible. To solve this hybrid decision-making problem, a hierarchical proximal policy optimization (HPPO) algorithm with an attention mechanism (AT-HPPO) is developed, which maintains stable performance with high robustness and generalization. A new indicator, potential collision energy in conflicts (PCEC), is proposed to evaluate the developed driving strategy from the perspective of the consequences of potential accidents. Performance evaluations in both simulation and dataset-based tests demonstrate that the model yields driving strategies that reduce both the likelihood and the consequences of potential accidents while maintaining driving efficiency. The developed method is especially valuable for AVs driving on highways, where heavy vehicles make up a high proportion of the traffic.
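The abstract introduces a weight-aware risk indicator and the PCEC consequence measure without giving their formulas. The sketch below is a minimal illustration, not the paper's formulation: it assumes PCEC behaves like the kinetic energy dissipated in a hypothetical perfectly inelastic collision between the two vehicles, and it uses a toy exponential risk field scaled by the neighbour's mass. All function names and parameters (e.g., m_ref, sigma) are hypothetical and chosen only to show how vehicle weight can enter both quantities.

```python
import numpy as np

def potential_collision_energy(m_ego, m_other, v_ego, v_other):
    """Illustrative PCEC-style consequence measure (assumption, not the
    paper's exact definition): kinetic energy dissipated if the two
    vehicles collided perfectly inelastically, in joules."""
    dv = np.linalg.norm(np.asarray(v_ego) - np.asarray(v_other))  # relative speed [m/s]
    reduced_mass = (m_ego * m_other) / (m_ego + m_other)          # [kg]
    return 0.5 * reduced_mass * dv ** 2

def weighted_risk(m_other, rel_pos, rel_vel, m_ref=1500.0, sigma=10.0):
    """Toy weight-aware risk-field term: a field that decays with distance
    from a surrounding vehicle, grows with closing speed, and is scaled by
    that vehicle's mass relative to a reference passenger-car mass.
    rel_pos / rel_vel are the other vehicle's position and velocity
    relative to the ego vehicle."""
    distance = np.linalg.norm(rel_pos)
    # Rate at which the gap is shrinking (zero if the vehicles are separating).
    closing_speed = max(0.0, -np.dot(rel_pos, rel_vel) / (distance + 1e-6))
    mass_factor = m_other / m_ref
    return mass_factor * (1.0 + closing_speed) * np.exp(-distance / sigma)

if __name__ == "__main__":
    # Ego car (1.5 t) closing at 5 m/s on a 20 t truck 30 m ahead:
    # the heavy neighbour yields a larger risk value and a larger PCEC
    # than an identically placed passenger car would.
    print(potential_collision_energy(1500.0, 20000.0, [25.0, 0.0], [20.0, 0.0]))
    print(weighted_risk(20000.0, rel_pos=[30.0, 0.0], rel_vel=[-5.0, 0.0]))
```

Under these assumptions, the reduced-mass form makes a conflict with a heavy truck far more costly than one with a compact car at the same relative speed, which is the intuition behind folding vehicle weights into both the anticipated risk and the evaluation of potential accident consequences.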
Pages: 19605-19618
Number of pages: 14