Augmented Mixed Vehicular Platoon Control With Dense Communication Reinforcement Learning for Traffic Oscillation Alleviation

Cited by: 1
Authors
Li, Meng [1 ,2 ]
Cao, Zehong [3 ]
Li, Zhibin [1 ]
Affiliations
[1] Southeast Univ, Sch Transportat, Nanjing 210096, Peoples R China
[2] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore 639798, Singapore
[3] Univ South Australia, STEM, Adelaide, SA 5000, Australia
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 22
Funding
National Natural Science Foundation of China
Keywords
Oscillators; Training; Fluctuations; Safety; Internet of Things; Optimization; Vehicle dynamics; Augmented Intelligence of Things (AIoT); connected autonomous vehicle (CAV); mixed vehicular platoon; reinforcement learning; traffic oscillation; CAR; VEHICLES; MODEL;
DOI
10.1109/JIOT.2024.3409618
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Traffic oscillations present significant challenges to road transportation systems, resulting in reduced fuel efficiency, heightened crash risks, and severe congestion. Recently emerging Augmented Intelligence of Things (AIoT) technology holds promise for enhancing traffic flow through vehicle-road cooperation. A representative application involves using deep reinforcement learning (DRL) techniques to control connected autonomous vehicle (CAV) platoons to alleviate traffic oscillations. However, uncertainties in human-driven vehicles (HDVs) driving behavior and the random distribution of CAVs make it challenging to achieve effective traffic oscillation alleviation in the Internet of Things environment. Existing DRL-based mixed vehicular platoon control strategies underutilize downstream traffic data, impairing CAVs' ability to predict and mitigate traffic oscillations, leading to inefficient speed adjustments and discomfort. This article proposes a dense communication cooperative RL policy for mixed vehicular platoons to address these challenges. It employs a parameter-sharing structure and a dense information flow topology, enabling CAVs to proactively respond to traffic oscillations while accommodating arbitrary vehicle distributions and communication failures. Experimental results demonstrate superior performance of the proposed strategy in driving efficiency, comfort, and safety, particularly in scenarios involving multivehicle cut-ins or cut-outs and communication failures.
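The abstract's two key ingredients, a parameter-sharing structure and a dense information flow topology, can be illustrated with a minimal sketch. All names, dimensions, and features below are illustrative assumptions, not details from the paper: every CAV evaluates the same shared weights on a dense observation built from several downstream vehicles, and a per-link mask zeroes out inputs from failed V2V connections.

```python
import numpy as np

rng = np.random.default_rng(0)

N_OBS = 4   # downstream vehicles each CAV can hear from (assumed)
FEATS = 3   # assumed per-vehicle features: gap, relative speed, speed

# Parameter sharing: one weight set reused by every CAV in the platoon.
W1 = rng.normal(0.0, 0.1, (N_OBS * FEATS, 16))
W2 = rng.normal(0.0, 0.1, 16)

def shared_policy(obs, mask):
    """Map a dense observation to a bounded acceleration command.

    obs  : (N_OBS, FEATS) states of downstream vehicles
    mask : (N_OBS,) 1.0 if the V2V link is alive, 0.0 on failure
    """
    x = (obs * mask[:, None]).reshape(-1)  # zero out failed links
    h = np.tanh(x @ W1)                    # shared hidden layer
    return float(np.tanh(h @ W2)) * 3.0    # accel in [-3, 3] m/s^2

# The same weights serve any platoon position or CAV distribution.
obs_a = rng.normal(size=(N_OBS, FEATS))
a_cmd = shared_policy(obs_a, np.ones(N_OBS))            # all links alive
a_deg = shared_policy(obs_a, np.array([1.0, 1.0, 0.0, 0.0]))  # two links down
```

Masking rather than resizing the input keeps one network valid for arbitrary vehicle distributions and partial communication failures, which is the property the abstract claims for the proposed topology.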
Pages: 35989-36001 (13 pages)