Model-Free Learning of Corridor Clearance: A Near-Term Deployment Perspective

Cited: 2
Authors
Suo, Dajiang [1 ]
Jayawardana, Vindula [2 ,3 ]
Wu, Cathy [4 ,5 ]
Affiliations
[1] Arizona State Univ, Polytech Sch, Mesa, AZ 85212 USA
[2] Massachusetts Inst Technol MIT, Lab Informat & Decis Syst, Cambridge, MA 02139 USA
[3] Massachusetts Inst Technol MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
[4] Massachusetts Inst Technol MIT, Dept Civil & Environm Engn, Lab Informat & Decis Syst, Cambridge, MA 02139 USA
[5] Massachusetts Inst Technol MIT, Inst Data Syst & Soc, Cambridge, MA 02139 USA
Keywords
Connected and automated vehicles; emergency vehicle corridor clearance; mixed autonomy; intelligent transportation systems; shock wave theory; deep reinforcement learning; EMERGENCY VEHICLES; TIME; SYSTEM;
DOI
10.1109/TITS.2023.3344473
CLC Number
TU [Building Science];
Subject Classification
0813
Abstract
An emerging public health application of connected and automated vehicle (CAV) technologies is to reduce the response times of emergency medical services (EMS) by indirectly coordinating traffic. In this work, we therefore study CAV-assisted corridor clearance for EMS vehicles from a near-term deployment perspective. Existing research on this topic often overlooks the impact of EMS vehicle disruptions on regular traffic, assumes 100% CAV penetration, relies on real-time traffic signal timing data and queue lengths at intersections, and makes various assumptions about traffic settings when deriving optimal model-based CAV control strategies. These assumptions pose significant challenges for near-term deployment and limit the real-world applicability of such methods. To overcome these challenges and enhance real-world applicability in the near term, we propose a model-free approach that employs deep reinforcement learning (DRL) to design CAV control strategies, demonstrating reduced design overhead and greater scalability and performance compared to model-based methods. Our qualitative analysis highlights the complexity of designing scalable EMS corridor clearance controllers for diverse traffic settings, for which the DRL controller offers ease of design compared to model-based methods. In numerical evaluations, the model-free DRL controller outperforms its model-based counterpart by improving traffic flow, and even improves EMS travel times in scenarios where only a single CAV is present. Across the 19 settings considered, the learned DRL controller reduces travel time by 25% in six instances, achieving an average improvement of 9%. These findings underscore the potential and promise of model-free DRL strategies in advancing EMS response and traffic flow coordination, with a focus on practical near-term deployment.
Pages: 4833-4848
Page count: 16