Emergency Vehicle Aware Lane Change Decision Model for Autonomous Vehicles Using Deep Reinforcement Learning

Cited by: 11
Authors
Alzubaidi, Ahmed [1 ]
Al Sumaiti, Ameena Saad [2 ]
Byon, Young-Ji [3 ]
Hosani, Khalifa Al [2 ]
Affiliations
[1] Khalifa Univ, Elect Engn & Comp Sci Dept, Abu Dhabi, U Arab Emirates
[2] Khalifa Univ, Adv Power & Energy Ctr, Dept Elect Engn & Comp Sci, Abu Dhabi, U Arab Emirates
[3] Khalifa Univ Sci & Technol, Dept Civil Infrastructure & Environm Engn, Abu Dhabi, U Arab Emirates
Keywords
Liquid crystal displays; Safety; Road traffic; Decision making; Behavioral sciences; Autonomous vehicles; Reinforcement learning; Deep learning; Deep reinforcement learning; autonomous vehicles; lane changes;
DOI
10.1109/ACCESS.2023.3253503
CLC number
TP [Automation and Computer Technology];
Subject classification code
0812;
Abstract
Autonomous Vehicles (AVs) have advanced rapidly in recent years, as they promise to be safe and to reduce the burden of the driving task. AVs share the road with various categories of vehicles, including Emergency Vehicles (EMVs) such as police cars and ambulances. When approached by an active EMV, all vehicles are expected to cooperate with the EMV so that its travel time is minimized. The decision-making block of an AV is responsible for instructing the AV to change lanes, a task typically handled by the Lane Change Decision (LCD) model. A typical LCD model tends to overlook the presence of nearby EMVs, as it neglects the impact of the lane change on the EMV's utility. To address this challenge, this paper proposes an Emergency Vehicle Aware LCD model using Deep Reinforcement Learning (DRL). To the best of our knowledge, this is one of the pioneering works proposing a DRL solution for this problem, addressing important limitations that have been identified. The proposed solution was evaluated against a rule-based LCD model known as MOBIL in terms of safety and level of cooperativeness with the EMV. Key results from the comparison are that (1) both achieve identical safety levels, (2) the proposed solution takes far less time to give up the lane when approached by an EMV, and (3) the proposed solution never blocks the path of the EMV, whereas MOBIL occasionally does.
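For context on the rule-based baseline named in the abstract: MOBIL decides a lane change from two acceleration-based criteria, a safety criterion (the new follower must not be forced to brake harder than a safe limit) and an incentive criterion (the ego's acceleration gain, plus a politeness-weighted gain of the affected neighbours, must exceed a switching threshold). A minimal sketch of that logic, assuming the standard MOBIL formulation with hypothetical default parameter values (the paper's exact settings are not given in this record):

```python
def mobil_lane_change(a_c, a_c_new, a_n, a_n_new, a_o, a_o_new,
                      p=0.5, b_safe=4.0, delta_a_th=0.1):
    """Return True if MOBIL advises a lane change.

    a_c / a_c_new : ego acceleration before / after the change (m/s^2)
    a_n / a_n_new : new follower's acceleration before / after
    a_o / a_o_new : old follower's acceleration before / after
    p             : politeness factor weighting neighbours' losses
    b_safe        : maximum braking imposed on the new follower (m/s^2)
    delta_a_th    : switching threshold preventing marginal changes
    """
    # Safety criterion: the new follower must not brake harder than b_safe.
    if a_n_new < -b_safe:
        return False
    # Incentive criterion: ego gain plus politeness-weighted gain of the
    # two affected followers must exceed the switching threshold.
    incentive = (a_c_new - a_c) + p * ((a_n_new - a_n) + (a_o_new - a_o))
    return incentive > delta_a_th
```

Note that neither criterion involves an EMV term, which illustrates the gap the abstract points to: a lane change that blocks an approaching EMV can still satisfy both conditions.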
Pages: 27127-27137
Page count: 11