Multi-Agent Reinforcement Learning for Highway Platooning

Cited by: 5
Authors
Kolat, Mate [1]
Becsi, Tamas [1]
Affiliations
[1] Budapest Univ Technol & Econ, Dept Control Transportat & Vehicle Syst, H-1111 Budapest, Hungary
Keywords
deep learning; reinforcement learning; platooning; road traffic control; multi-agent systems; vehicle; game
DOI
10.3390/electronics12244963
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Discipline Code
0812;
Abstract
The advent of autonomous vehicles has opened new horizons for transportation efficiency and safety. Platooning, a strategy in which vehicles travel closely together in a synchronized manner, holds promise for reducing traffic congestion, lowering fuel consumption, and enhancing overall road safety. This article explores the application of Multi-Agent Reinforcement Learning (MARL) combined with Proximal Policy Optimization (PPO) to optimize autonomous vehicle platooning. MARL empowers vehicles to communicate and collaborate, enabling real-time decision making in complex traffic scenarios, while PPO, a state-of-the-art reinforcement learning algorithm, ensures stable and efficient training for the platooning agents. The synergy between the two enables intelligent platooning strategies that adapt dynamically to changing traffic conditions, minimize inter-vehicle gaps, and maximize road capacity. Building on these insights, the article introduces a cooperative MARL approach that leverages PPO to further optimize platooning; this cooperative framework enhances the adaptability and efficiency of platooning strategies, marking a significant advancement toward intelligent and responsive autonomous vehicle systems.
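The stability that the abstract attributes to PPO comes from its clipped surrogate objective, which bounds how far each update can move the policy. As an illustration only (not the authors' implementation; the function name and toy numbers are hypothetical), a minimal NumPy sketch of that clipped loss:

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate loss from PPO (Schulman et al., 2017).

    ratio = pi_new(a|s) / pi_old(a|s); clipping the ratio to
    [1 - eps, 1 + eps] removes the incentive for destructively
    large policy updates, which is what stabilizes training.
    """
    ratio = np.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # PPO maximizes the elementwise minimum; negate to get a loss to minimize
    return -np.mean(np.minimum(unclipped, clipped))

# Toy check with two actions: a ratio of 1.8 (positive advantage) is
# clipped to 1.2, and a ratio of 0.2 (negative advantage) to 0.8.
old = np.log(np.array([0.5, 0.5]))
new = np.log(np.array([0.9, 0.1]))
adv = np.array([1.0, -1.0])
loss = ppo_clip_loss(new, old, adv)  # -> -0.2
```

In the multi-agent setting described above, each platooning agent would optimize such an objective over its own action distribution, with cooperation arising from shared observations or a shared reward signal.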
Pages: 13
Related References (41 in total)
[21] Lopes, D. R. In: 2019 18th European Control Conference (ECC), 2019, p. 4160. DOI: 10.23919/ECC.2019.8796226.
[22] Lu, Sikai; Cai, Yingfeng; Chen, Long; Wang, Hai; Sun, Xiaoqiang; Jia, Yunyi. A sharing deep reinforcement learning method for efficient vehicle platooning control. IET Intelligent Transport Systems, 2022, 16(12): 1697-1709.
[23] Maiti, Santa; Winter, Stephan; Kulik, Lars; Sarkar, Sudeshna. The Impact of Flexible Platoon Formation Operations. IEEE Transactions on Intelligent Vehicles, 2020, 5(2): 229-239.
[24] Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis. Human-level control through deep reinforcement learning. Nature, 2015, 518(7540): 529-533.
[25] Ng, A. Y. Springer Tracts in Advanced Robotics, 2006, 21: 363.
[26] Ogitsu, T. In: Proc. 9th FORMS/FORMAT 2012, 2012.
[27] Peng, Bo; Yu, Dexin; Zhou, Huxing; Xiao, Xue; Fang, Yunfeng. A Platoon Control Strategy for Autonomous Vehicles Based on Sliding-Mode Control Theory. IEEE Access, 2020, 8: 81776-81788.
[28] Robinson, T. In: Proc. 17th World Congress on Intelligent Transport Systems, 2010, Vol. 1, p. 12.
[29] Shabestary, S. M. A. In: IEEE International Conference on Intelligent Transportation Systems (ITSC), 2018, p. 286. DOI: 10.1109/ITSC.2018.8569549.
[30] Shladover, S. E.; Desoer, C. A.; Hedrick, J. K.; Tomizuka, M.; Walrand, J.; Zhang, W. B.; McMahon, D. H.; Huei, P.; Sheikholeslam, S.; McKeown, N. Automatic Vehicle Control Developments in the PATH Program. IEEE Transactions on Vehicular Technology, 1991, 40(1): 114-130.