Multi-Agent Reinforcement Learning for Highway Platooning

Cited by: 4
Authors
Kolat, Mate [1 ]
Becsi, Tamas [1 ]
Affiliations
[1] Budapest Univ Technol & Econ, Dept Control Transportat & Vehicle Syst, H-1111 Budapest, Hungary
Keywords
deep learning; reinforcement learning; platooning; road traffic control; multi-agent systems; VEHICLE; GAME;
DOI
10.3390/electronics12244963
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The advent of autonomous vehicles has opened new horizons for transportation efficiency and safety. Platooning, a strategy in which vehicles travel closely together in a synchronized manner, holds promise for reducing traffic congestion, lowering fuel consumption, and enhancing overall road safety. This article explores the application of Multi-Agent Reinforcement Learning (MARL) combined with Proximal Policy Optimization (PPO) to optimize autonomous vehicle platooning. MARL allows vehicles to communicate and collaborate, enabling real-time decision making in complex traffic scenarios, while PPO provides stable and efficient training for the platooning agents. The synergy between MARL and PPO supports the development of intelligent platooning strategies that adapt dynamically to changing traffic conditions, minimize inter-vehicle gaps, and maximize road capacity. Building on these insights, the article introduces a cooperative MARL approach, again leveraging PPO, to further optimize platooning. This cooperative framework enhances the adaptability and efficiency of platooning strategies, marking a significant advancement in the pursuit of intelligent and responsive autonomous vehicle systems.
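As a reading aid only, and not the authors' released code, the minimal Python sketch below illustrates the PPO ingredient named in the abstract: the clipped surrogate objective evaluated over transitions pooled from several cooperating platooning agents, together with an illustrative gap-keeping reward. The function names ppo_clipped_objective and gap_keeping_reward, the 10 m target gap, and the pooled-transition setup are assumptions for illustration, not details taken from the paper.

    import numpy as np

    def ppo_clipped_objective(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
        # Probability ratio r_t = pi_new(a|s) / pi_old(a|s), computed in log space.
        ratio = np.exp(new_log_probs - old_log_probs)
        unclipped = ratio * advantages
        clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        # PPO maximizes the minimum of the clipped and unclipped surrogates,
        # averaged here over transitions pooled from all platooning agents.
        return float(np.mean(np.minimum(unclipped, clipped)))

    def gap_keeping_reward(gap_m, target_gap_m=10.0):
        # Illustrative shaped reward (assumption): penalize deviation from a
        # desired inter-vehicle gap; a non-positive gap (collision) is heavily penalized.
        if gap_m <= 0.0:
            return -100.0
        return -abs(gap_m - target_gap_m)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 256  # pooled transitions from several follower agents
        old_lp = rng.normal(-1.0, 0.1, size=n)
        new_lp = old_lp + rng.normal(0.0, 0.05, size=n)
        adv = rng.normal(0.0, 1.0, size=n)
        print("surrogate objective:", ppo_clipped_objective(new_lp, old_lp, adv))
        print("reward at 12 m gap:", gap_keeping_reward(12.0))

Pooling all agents' transitions into one shared clipped objective is one common way to realize cooperative MARL with a shared policy; whether the paper shares parameters across agents or trains per-agent policies is not stated in this record.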
Pages: 13
Related Papers
50 records in total
  • [21] Hierarchical Multi-Agent Training Based on Reinforcement Learning
    Wang, Guanghua
    Li, Wenjie
    Wu, Zhanghua
    Guo, Xian
    2024 9TH ASIA-PACIFIC CONFERENCE ON INTELLIGENT ROBOT SYSTEMS, ACIRS, 2024, : 11 - 18
  • [22] MARLYC: Multi-Agent Reinforcement Learning Yaw Control
    Kadoche, Elie
    Gourvenec, Sebastien
    Pallud, Maxime
    Levent, Tanguy
    RENEWABLE ENERGY, 2023, 217
  • [23] Shaping multi-agent systems with gradient reinforcement learning
    Buffet, Olivier
    Dutech, Alain
    Charpillet, Francois
    AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 2007, 15 (02) : 197 - 220
  • [24] Training Cooperative Agents for Multi-Agent Reinforcement Learning
    Bhalla, Sushrut
    Subramanian, Sriram G.
    Crowley, Mark
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 1826 - 1828
  • [25] Shaping multi-agent systems with gradient reinforcement learning
    Olivier Buffet
    Alain Dutech
    François Charpillet
    Autonomous Agents and Multi-Agent Systems, 2007, 15 : 197 - 220
  • [26] Experience Selection in Multi-Agent Deep Reinforcement Learning
    Wang, Yishen
    Zhang, Zongzhang
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 864 - 870
  • [27] Shapley Counterfactual Credits for Multi-Agent Reinforcement Learning
    Li, Jiahui
    Kuang, Kun
    Wang, Baoxiang
    Liu, Furui
    Chen, Long
    Wu, Fei
    Xiao, Jun
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 934 - 942
  • [28] Multi-Agent Deep Reinforcement Learning with Emergent Communication
    Simoes, David
    Lau, Nuno
    Reis, Luis Paulo
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [29] Survey of Multi-Agent Strategy Based on Reinforcement Learning
    Chen, Liang
    Guo, Ting
    Liu, Yun-ting
    Yang, Jia-ming
    PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 604 - 609
  • [30] MAGNet: Multi-agent Graph Network for Deep Multi-agent Reinforcement Learning
    Malysheva, Aleksandra
    Kudenko, Daniel
    Shpilman, Aleksei
    2019 XVI INTERNATIONAL SYMPOSIUM PROBLEMS OF REDUNDANCY IN INFORMATION AND CONTROL SYSTEMS (REDUNDANCY), 2019, : 171 - 176