Sharing Control Knowledge Among Heterogeneous Intersections: A Distributed Arterial Traffic Signal Coordination Method Using Multi-Agent Reinforcement Learning

Cited by: 0
Authors
Zhu, Hong [1 ]
Feng, Jialong [1 ]
Sun, Fengmei [1 ]
Tang, Keshuang [1 ]
Zang, Di [2 ,3 ]
Kang, Qi [4 ,5 ]
Affiliations
[1] Tongji Univ, Coll Transportat Engn, Key Lab Rd & Traff Engn, Minist Educ, Shanghai 201804, Peoples R China
[2] Tongji Univ, Dept Comp Sci & Technol, Shanghai 200092, Peoples R China
[3] Tongji Univ, Serv Comp, Key Lab Embedded Syst, Minist Educ, Shanghai 200092, Peoples R China
[4] Tongji Univ, Dept Control Sci & Engn, Shanghai 201804, Peoples R China
[5] Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 200092, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Optimization; Adaptation models; Process control; Reinforcement learning; Training; Stability criteria; Roads; Real-time systems; Electronic mail; Delays; Arterial traffic signal control; multi-agent reinforcement learning; proximal policy optimization; experience sharing; REAL-TIME; MODEL; SYSTEM;
DOI
10.1109/TITS.2024.3521514
Chinese Library Classification (CLC)
TU [Architecture Science];
Discipline Code
0813;
Abstract
Treating each intersection as a basic agent, multi-agent reinforcement learning (MARL) methods have emerged as the predominant approach for distributed adaptive traffic signal control (ATSC) in multi-intersection scenarios, such as arterial coordination. MARL-based ATSC currently faces two challenges: disturbances from the control policies of other intersections may impair the learning and control stability of the agents, and the heterogeneous features across intersections may complicate coordination efforts. To address these challenges, this study proposes a novel MARL method for distributed ATSC in arterials, termed the Distributed Controller for Heterogeneous Intersections (DCHI). The DCHI method introduces a Neighborhood Experience Sharing (NES) framework, wherein each agent utilizes both local data and shared experiences from adjacent intersections to improve its control policy. Within this framework, the neural networks of each agent are partitioned into two parts following the Knowledge Homogenizing Encapsulation (KHE) mechanism. The first part manages heterogeneous intersection features and transforms the control experiences, while the second part optimizes homogeneous control logic. Experimental results demonstrate that the proposed DCHI achieves efficiency improvements in average travel time of over 30% compared to traditional methods and yields performance similar to the centralized sharing method. Furthermore, vehicle trajectories reveal that DCHI can adaptively establish green wave bands in a distributed manner. Given its superior control performance, accommodation of heterogeneous intersections, and low reliance on information networks, DCHI could significantly advance the application of MARL-based ATSC methods in practice.
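The KHE idea described in the abstract — a per-intersection front-end that maps heterogeneous observations into a common latent space, so experiences become shareable across intersections with different geometries — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual architecture: the class `KHEAgent`, the linear encoder/policy, and the 4-dimensional latent are all assumptions made for illustration.

```python
import numpy as np

class KHEAgent:
    """Toy sketch of Knowledge Homogenizing Encapsulation (KHE):
    part 1 (encoder) is intersection-specific and absorbs heterogeneous
    input shapes; part 2 (policy) has the same shape for every agent,
    so it operates on homogeneous control logic."""

    LATENT_DIM = 4  # assumed fixed size of the shared latent space

    def __init__(self, n_lanes, n_phases, seed=0):
        rng = np.random.default_rng(seed)
        # Part 1: heterogeneous front-end, shaped by this intersection's lanes.
        self.encoder = rng.normal(size=(self.LATENT_DIM, n_lanes))
        # Part 2: homogeneous control logic, identical shape across agents.
        self.policy = rng.normal(size=(n_phases, self.LATENT_DIM))
        self.buffer = []  # experiences stored in the shared latent space

    def encode(self, obs):
        return np.tanh(self.encoder @ obs)

    def act(self, obs):
        return int(np.argmax(self.policy @ self.encode(obs)))

    def store(self, obs, action, reward):
        # Experiences are homogenized before storage, so the tuple is
        # geometry-agnostic and safe to hand to a neighbor.
        self.buffer.append((self.encode(obs), action, reward))

    def receive(self, shared):
        # Neighborhood Experience Sharing (NES): ingest a neighbor's
        # homogenized experiences into the local buffer.
        self.buffer.extend(shared)

# Two heterogeneous intersections: 8 vs. 12 approach lanes, 4 phases each.
a = KHEAgent(n_lanes=8, n_phases=4, seed=1)
b = KHEAgent(n_lanes=12, n_phases=4, seed=2)
a.store(np.ones(8), a.act(np.ones(8)), reward=-3.0)
b.receive(a.buffer)  # latent dimensions match despite different lane counts
print(len(b.buffer), b.buffer[0][0].shape)
```

Because both buffers hold fixed-size latent tuples, agent `b` can reuse `a`'s experience even though their raw observation spaces differ, which is the property the NES framework relies on.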
Pages: 2760 - 2776
Page count: 17
Related Papers
50 records in total
  • [41] Cooperative Multi-Agent Reinforcement Learning Framework for Edge Intelligence-Empowered Traffic Light Control
    Shi, Haiyong
    Liu, Bingyi
    Wang, Enshu
    Han, Weizhen
    Wang, Jinfan
    Cui, Shihong
    Wu, Libing
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (04) : 7373 - 7384
  • [42] Regional Multi-Agent Cooperative Reinforcement Learning for City-Level Traffic Grid Signal Control
    Li, Yisha
    Zhang, Ya
    Li, Xinde
    Sun, Changyin
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11 (09) : 1987 - 1998
  • [43] A Distributed Multi-Agent Reinforcement Learning With Graph Decomposition Approach for Large-Scale Adaptive Traffic Signal Control
    Jiang, Shan
    Huang, Yufei
    Jafari, Mohsen
    Jalayer, Mohammad
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (09) : 14689 - 14701
  • [44] Dynamic traffic signal control for heterogeneous traffic conditions using Max Pressure and Reinforcement Learning
    Agarwal, Amit
    Sahu, Deorishabh
    Mohata, Rishabh
    Jeengar, Kuldeep
    Nautiyal, Anuj
    Saxena, Dhish Kumar
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 254
  • [45] A multi-agent reinforcement learning method with curriculum transfer for large-scale dynamic traffic signal control
    Li, Xuesi
    Li, Jingchen
    Shi, Haobin
    APPLIED INTELLIGENCE, 2023, 53 : 21433 - 21447
  • [46] CVDMARL: A Communication-Enhanced Value Decomposition Multi-Agent Reinforcement Learning Traffic Signal Control Method
    Chang, Ande
    Ji, Yuting
    Wang, Chunguang
    Bie, Yiming
    SUSTAINABILITY, 2024, 16 (05)
  • [47] Distributed Cooperative Multi-Agent Reinforcement Learning with Directed Coordination Graph
    Jing, Gangshan
    Bai, He
    George, Jemin
    Chakrabortty, Aranya
    Sharma, Piyush K.
    2022 AMERICAN CONTROL CONFERENCE, ACC, 2022, : 3273 - 3278
  • [48] Distributed Transmission Control for Wireless Networks using Multi-Agent Reinforcement Learning
    Farquhar, Collin
    Kumar, Prem
    Jagannath, Anu
    Jagannath, Jithin
    BIG DATA IV: LEARNING, ANALYTICS, AND APPLICATIONS, 2022, 12097
  • [49] IALight: Importance-Aware Multi-Agent Reinforcement Learning for Arterial Traffic Cooperative Control
    Wei, Lu
    Zhang, Xiaoyan
    Fan, Lijun
    Gao, Lei
    Yang, Jian
    PROMET-TRAFFIC & TRANSPORTATION, 2025, 37 (01) : 151 - 169
  • [50] AGRCNet: communicate by attentional graph relations in multi-agent reinforcement learning for traffic signal control
    Ma, Tinghuai
    Peng, Kexing
    Rong, Huan
    Qian, Yurong
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (28) : 21007 - 21022