Multi-objective multi-agent deep reinforcement learning to reduce bus bunching for multiline services with a shared corridor

Cited by: 9
Authors
Wang, Jiawei [1 ]
Sun, Lijun [1 ,2 ]
Affiliations
[1] McGill Univ, Dept Civil Engn, Montreal, PQ H3A 0C3, Canada
[2] 492-817 Sherbrooke St West, Macdonald Engn Bldg, Montreal, PQ H3A 0C3, Canada
Funding
Canada Foundation for Innovation; Fonds de recherche du Quebec - Sante;
关键词
Bus bunching; Multi-line bus control; Multi-agent system; Deep reinforcement learning; Multi-objective; STRATEGIES;
DOI
10.1016/j.trc.2023.104309
Chinese Library Classification (CLC)
U [Transportation];
Subject classification
08; 0823;
Abstract
Bus bunching is a long-standing problem in transit operations that ruins the regularity of transit service. In a typical urban transit network with multiple lines sharing a corridor, bus bunching becomes more frequent because the shared corridor introduces additional uncertainty. While multi-agent reinforcement learning (MARL) is a promising scheme for learning efficient control policies in multi-agent systems, few studies have explored its applicability to multi-line transit control. In this study, we focus on a basic transit network in which two bus lines share a corridor. We propose an efficient MARL framework to learn multi-line bus holding control that avoids bus bunching. Specifically, we design observation and reward functions that incorporate multi-line information. In addition, a preference weights producer is introduced to update the objective weights toward a better trajectory evaluation during daily transit operation; in this way, we handle the multi-objective issue in multi-line control. In experimental studies, we validate the superiority of the method on real-world bus lines. Results show that augmenting the state and reward with multi-line information benefits MARL in multi-line bus control. Moreover, by updating the preference weights toward less passenger waiting time, the regularity of transit service is further improved.
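To make the preference-weighted, multi-objective reward idea concrete, the sketch below shows one way such a scalarisation and weight update could look. It is a minimal illustration under assumed names (scalarised_reward, PreferenceWeights, the three objective components), not the authors' implementation; the paper's preference weights producer updates weights toward a good trajectory evaluation, which the toy rule here only mimics by shifting weight onto objectives whose episode-level cost worsened.

    # Minimal sketch (assumed names, not the paper's code): combine per-objective
    # costs for a bus agent into one scalar reward with preference weights, and
    # nudge the weights between episodes based on a trajectory-level evaluation.
    import numpy as np

    def scalarised_reward(objectives: np.ndarray, weights: np.ndarray) -> float:
        """Weighted sum of per-objective costs (e.g., own-line headway deviation,
        shared-corridor headway deviation, passenger waiting time), negated so
        that a larger reward means more regular service."""
        return float(-(weights * objectives).sum())

    class PreferenceWeights:
        """Toy preference-weight producer: after each operating day, increase the
        weight of objectives whose evaluation got worse, then re-normalise."""

        def __init__(self, n_objectives: int, step: float = 0.1):
            self.w = np.full(n_objectives, 1.0 / n_objectives)
            self.step = step
            self.prev_eval = None  # previous episode-level cost per objective

        def update(self, episode_eval: np.ndarray) -> np.ndarray:
            if self.prev_eval is not None:
                worsened = episode_eval > self.prev_eval
                self.w[worsened] += self.step
                self.w /= self.w.sum()
            self.prev_eval = episode_eval.copy()
            return self.w

    # Example: one holding decision with three objective components.
    producer = PreferenceWeights(n_objectives=3)
    step_costs = np.array([0.8, 0.3, 1.2])         # per-step objective costs
    r = scalarised_reward(step_costs, producer.w)  # reward fed to the MARL agent
    producer.update(np.array([0.7, 0.4, 1.1]))     # end-of-day trajectory evaluation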
Pages: 16
Related papers
50 records in total
  • [1] Reducing Bus Bunching with Asynchronous Multi-Agent Reinforcement Learning
    Wang, Jiawei
    Sun, Lijun
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 426 - 433
  • [2] Multi-Agent Deep Reinforcement Learning for Resource Allocation in the Multi-Objective HetNet
    Nie, Hongrui
    Li, Shaosheng
    Liu, Yong
    IWCMC 2021: 2021 17TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE (IWCMC), 2021, : 116 - 121
  • [3] Multi-Objective Dynamic Path Planning with Multi-Agent Deep Reinforcement Learning
    Tao, Mengxue
    Li, Qiang
    Yu, Junxi
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2025, 13 (01)
  • [4] Mitigating Bus Bunching via Hierarchical Multi-Agent Reinforcement Learning
    Yu, Mengdi
    Yang, Tao
    Li, Chunxiao
    Jin, Yaohui
    Xu, Yanyan
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (08) : 9675 - 9692
  • [5] Dynamic holding control to avoid bus bunching: A multi-agent deep reinforcement learning framework
    Wang, Jiawei
    Sun, Lijun
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2020, 116 (116)
  • [6] A multi-objective multi-agent deep reinforcement learning approach to residential appliance scheduling
    Lu, Junlin
    Mannion, Patrick
    Mason, Karl
    IET SMART GRID, 2022, 5 (04) : 260 - 280
  • [7] Multi-objective reinforcement learning for designing ethical multi-agent environments
    Rodriguez-Soto, Manel
    Lopez-Sanchez, Maite
    Rodriguez-Aguilar, Juan A.
    NEURAL COMPUTING & APPLICATIONS, 2023,
  • [8] Multi-Agent Deep Reinforcement Learning based Multi-Objective Resource Optimization in a Distributed Manufacturing System
    Shen, Xinchang
    Tham, Chen-Khong
    2024 IEEE 99TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2024-SPRING, 2024,
  • [9] Multi-Objective Workflow Scheduling With Deep-Q-Network-Based Multi-Agent Reinforcement Learning
    Wang, Yuandou
    Liu, Hang
    Zheng, Wanbo
    Xia, Yunni
    Li, Yawen
    Chen, Peng
    Guo, Kunyin
    Xie, Hong
    IEEE ACCESS, 2019, 7 : 39974 - 39982