V2X and Deep Reinforcement Learning-Aided Mobility-Aware Lane Changing for Emergency Vehicle Preemption in Connected Autonomous Transport Systems

Cited by: 3
Authors
Ding, Cao [1 ]
Ho, Ivan Wang-Hei [1 ,2 ]
Chung, Edward [1 ]
Fan, Tingting [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Elect & Elect Engn, Hong Kong, Peoples R China
[2] Otto Poon Charitable Fdn Smart Cities Res Inst, Hong Kong, Peoples R China
Keywords
Vehicle-to-everything; Quality of service; Safety; Delays; Data models; Peer-to-peer computing; Numerical models; V2X; vehicular networks; lane change; emergency vehicle preemption; CHANGE DECISION-MAKING; DISSEMINATION;
DOI
10.1109/TITS.2024.3350334
Chinese Library Classification
TU [Architectural Science];
Subject Classification Code
0813
Abstract
Emergency vehicle preemption (EVP) aims to grant the right-of-way to emergency vehicles (EVs) so that they can reach the incident location efficiently. The travel time of EVs is the most important indicator of EVP efficiency and should be minimized by dedicated methods or algorithms. However, conventional EVP methods based on strobe emitters, light emitters, or sirens perform poorly in high-density vehicular traffic. Vehicle-to-everything (V2X) communication plays a pivotal role in intelligent transportation systems (ITS) and can help EVs travel safely and efficiently in connected autonomous transport systems (CATS). Enabled by V2X, this paper proposes a deep reinforcement learning-aided mobility-aware lane change algorithm (DRL-MLC) to enhance the efficiency of EVP. In the first stage, the EV learns to change lanes using a policy-based deep reinforcement learning (DRL) algorithm to find the shortest trajectory. In the second stage, autonomous vehicles (AVs) perform mobility-aware lane changing (MLC) to make way for the EV based on the emergency messages (EM) they receive. Note that the performance of DRL-MLC strongly depends on the quality of service (QoS) of V2X, and on-board unit (OBU) network parameters that do not match the vehicular density will significantly degrade the QoS. Therefore, in the third stage, the proposed algorithm fine-tunes specific parameters, including communication range, carrier sensing range, packet rate, and contention window, according to the real-time vehicular density using a curve-fitting optimization method. Our results indicate that at medium-to-high density (e.g., 0.15 veh/m), DRL-MLC improves average speed by more than 49% over traditional lane-changing models, and the ten-minute target travel time for EVs is met 95% of the time with the proposed algorithm.
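The third stage described above fits a curve of QoS-optimal OBU parameters against vehicular density, then looks up the parameter for the observed real-time density. A minimal sketch of this idea for one parameter (the contention window) is given below; the sample densities, window sizes, fitted form, and the `contention_window` helper are all illustrative assumptions, not values or code from the paper.

```python
import numpy as np

# Hypothetical calibration points: vehicular density (veh/m) versus the
# contention window size that maximized measured QoS in simulation.
# These values are illustrative only.
density = np.array([0.02, 0.05, 0.08, 0.11, 0.15])
best_cw = np.array([15, 31, 63, 127, 255])

# Curve-fit log2(CW + 1) as a linear function of density, mirroring the
# idea of tuning OBU parameters to real-time density via curve fitting.
coeffs = np.polyfit(density, np.log2(best_cw + 1), deg=1)

def contention_window(rho: float) -> int:
    """Return a (2^k - 1)-sized contention window for density rho (veh/m)."""
    exponent = int(round(np.polyval(coeffs, rho)))
    exponent = max(4, min(10, exponent))  # clamp CW to [15, 1023]
    return 2 ** exponent - 1
```

The same pattern would repeat for communication range, carrier sensing range, and packet rate, each with its own fitted curve; denser traffic here maps to a larger contention window to reduce channel contention.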
Pages: 7281-7293 (13 pages)
Related Papers
8 items in total
  • [1] Emergency Vehicle Aware Lane Change Decision Model for Autonomous Vehicles Using Deep Reinforcement Learning
    Alzubaidi, Ahmed
    Al Sumaiti, Ameena Saad
    Byon, Young-Ji
    Hosani, Khalifa Al
    IEEE ACCESS, 2023, 11 : 27127 - 27137
  • [2] Deep Reinforcement Learning Aided Platoon Control Relying on V2X Information
    Lei, Lei
    Liu, Tong
    Zheng, Kan
    Hanzo, Lajos
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (06) : 5811 - 5826
  • [3] Deep Reinforcement Learning Approach for V2X Managed Intersections of Connected Vehicles
    Lombard, Alexandre
    Noubli, Ahmed
    Abbas-Turki, Abdeljalil
    Gaud, Nicolas
    Galland, Stephane
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (07) : 7178 - 7189
  • [4] A Novel Reinforcement Learning Method for Autonomous Driving With Intermittent Vehicle-to-Everything (V2X) Communications
    Chen, Longquan
    He, Ying
    Yu, F. Richard
    Pan, Weike
    Ming, Zhong
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (06) : 7722 - 7732
  • [5] Deep-Reinforcement-Learning-Based Distributed Vehicle Position Controls for Coverage Expansion in mmWave V2X
    Taya, Akihito
    Nishio, Takayuki
    Morikura, Masahiro
    Yamamoto, Koji
    IEICE TRANSACTIONS ON COMMUNICATIONS, 2019, E102B (10) : 2054 - 2065
  • [6] Deep Reinforcement Learning Enabled Energy-Efficient Resource Allocation in Energy Harvesting Aided V2X Communication
    Song, Yuqian
    Xiao, Yang
    Chen, Yaozhi
    Li, Guanyu
    Liu, Jun
    2022 IEEE 33RD ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (IEEE PIMRC), 2022, : 313 - 319
  • [7] A data-driven solution for intelligent power allocation of connected hybrid electric vehicles inspired by offline deep reinforcement learning in V2X scenario
    Niu, Zegong
    He, Hongwen
    APPLIED ENERGY, 2024, 372