Model-Free Learning of Corridor Clearance: A Near-Term Deployment Perspective

Cited by: 2
Authors
Suo, Dajiang [1 ]
Jayawardana, Vindula [2 ,3 ]
Wu, Cathy [4 ,5 ]
Affiliations
[1] Arizona State Univ, Polytech Sch, Mesa, AZ 85212 USA
[2] Massachusetts Inst Technol MIT, Lab Informat & Decis Syst, Cambridge, MA 02139 USA
[3] Massachusetts Inst Technol MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
[4] Massachusetts Inst Technol MIT, Dept Civil & Environm Engn, Lab Informat & Decis Syst, Cambridge, MA 02139 USA
[5] Massachusetts Inst Technol MIT, Inst Data Syst & Soc, Cambridge, MA 02139 USA
Keywords
Connected and automated vehicles; emergency vehicle corridor clearance; mixed autonomy; intelligent transportation systems; shock wave theory; deep reinforcement learning; EMERGENCY VEHICLES; TIME; SYSTEM;
DOI
10.1109/TITS.2023.3344473
CLC Number
TU [Architecture Science]
Discipline Classification Code
0813
Abstract
An emerging public health application of connected and automated vehicle (CAV) technologies is to reduce the response times of emergency medical services (EMS) by indirectly coordinating traffic. In this work, we therefore study CAV-assisted corridor clearance for EMS vehicles from a near-term deployment perspective. Existing research on this topic often overlooks the impact of EMS vehicle disruptions on regular traffic, assumes 100% CAV penetration, relies on real-time traffic signal timing data and queue lengths at intersections, and makes various assumptions about traffic settings when deriving optimal model-based CAV control strategies. These assumptions pose significant challenges for near-term deployment and limit the real-world applicability of such methods. To overcome these challenges and enhance near-term real-world applicability, we propose a model-free approach that employs deep reinforcement learning (DRL) to design CAV control strategies, demonstrating reduced design overhead and greater scalability and performance compared to model-based methods. Our qualitative analysis highlights the complexity of designing scalable EMS corridor clearance controllers for diverse traffic settings, in which the DRL controller offers ease of design compared to model-based methods. In numerical evaluations, the model-free DRL controller outperforms its model-based counterpart by improving traffic flow, and even improves EMS travel times in scenarios where only a single CAV is present. Across the 19 settings considered, the learned DRL controller excels, reducing travel time by 25% in six instances and achieving an average improvement of 9%. These findings underscore the promise of model-free DRL strategies in advancing EMS response and traffic flow coordination, with a focus on practical near-term deployment.
Pages: 4833-4848
Number of pages: 16
Related Papers
50 records in total
  • [21] Interactive learning for multi-finger dexterous hand: A model-free hierarchical deep reinforcement learning approach
    Li, Baojiang
    Qiu, Shengjie
    Bai, Jibo
    Wang, Bin
    Zhang, Zhekai
    Li, Liang
    Wang, Haiyan
    Wang, Xichao
    KNOWLEDGE-BASED SYSTEMS, 2024, 295
  • [22] Model-free optimization of power/efficiency tradeoffs in quantum thermal machines using reinforcement learning
    Erdman, Paolo A.
    Noe, Frank
    PNAS NEXUS, 2023, 2 (08):
  • [23] FlexPool: A Distributed Model-Free Deep Reinforcement Learning Algorithm for Joint Passengers and Goods Transportation
    Manchella, Kaushik
    Umrawal, Abhishek K.
    Aggarwal, Vaneet
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (04) : 2035 - 2047
  • [24] Non-stationarity Detection in Model-Free Reinforcement Learning via Value Function Monitoring
    Hussein, Maryem
    Keshk, Marwa
    Hussein, Aya
    ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT II, 2024, 14472 : 350 - 362
  • [25] Multi-Agent Pattern Formation: a Distributed Model-Free Deep Reinforcement Learning Approach
    Diallo, Elhadji Amadou Oury
    Sugawara, Toshiharu
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [26] Model-Free Real-Time EV Charging Scheduling Based on Deep Reinforcement Learning
    Wan, Zhiqiang
    Li, Hepeng
    He, Haibo
    Prokhorov, Danil
    IEEE TRANSACTIONS ON SMART GRID, 2019, 10 (05) : 5246 - 5257
  • [27] DeepPool: Distributed Model-Free Algorithm for Ride-Sharing Using Deep Reinforcement Learning
    Al-Abbasi, Abubakr O.
    Ghosh, Arnob
    Aggarwal, Vaneet
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2019, 20 (12) : 4714 - 4727
  • [28] On Improving Transient Behavior and Steady-State Performance of Model-free Iterative Learning Control
    Zhang, Geng-Hao
    Chen, Cheng-Wei
    IFAC PAPERSONLINE, 2020, 53 (02): : 1433 - 1438
  • [29] Back-Stepping Experience Replay With Application to Model-Free Reinforcement Learning for a Soft Snake Robot
    Qi, Xinda
    Chen, Dong
    Li, Zhaojian
    Tan, Xiaobo
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (09): : 7517 - 7524
  • [30] Intelligent Navigation of a Magnetic Microrobot with Model-Free Deep Reinforcement Learning in a Real-World Environment
    Salehi, Amar
    Hosseinpour, Soleiman
    Tabatabaei, Nasrollah
    Soltani Firouz, Mahmoud
    Yu, Tingting
    MICROMACHINES, 2024, 15 (01)