Sharing Control Knowledge Among Heterogeneous Intersections: A Distributed Arterial Traffic Signal Coordination Method Using Multi-Agent Reinforcement Learning

Times Cited: 0
Authors
Zhu, Hong [1 ]
Feng, Jialong [1 ]
Sun, Fengmei [1 ]
Tang, Keshuang [1 ]
Zang, Di [2 ,3 ]
Kang, Qi [4 ,5 ]
Affiliations
[1] Tongji Univ, Coll Transportat Engn, Key Lab Rd & Traff Engn, Minist Educ, Shanghai 201804, Peoples R China
[2] Tongji Univ, Dept Comp Sci & Technol, Shanghai 200092, Peoples R China
[3] Tongji Univ, Serv Comp, Key Lab Embedded Syst, Minist Educ, Shanghai 200092, Peoples R China
[4] Tongji Univ, Dept Control Sci & Engn, Shanghai 201804, Peoples R China
[5] Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 200092, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Optimization; Adaptation models; Process control; Reinforcement learning; Training; Stability criteria; Roads; Real-time systems; Electronic mail; Delays; Arterial traffic signal control; multi-agent reinforcement learning; proximal policy optimization; experience sharing; REAL-TIME; MODEL; SYSTEM;
DOI
10.1109/TITS.2024.3521514
Chinese Library Classification
TU [Architecture Science];
Discipline Classification Code
0813;
Abstract
Treating each intersection as a basic agent, multi-agent reinforcement learning (MARL) methods have emerged as the predominant approach for distributed adaptive traffic signal control (ATSC) in multi-intersection scenarios, such as arterial coordination. MARL-based ATSC currently faces two challenges: disturbances from the control policies of other intersections may impair the learning and control stability of the agents, and the heterogeneous features across intersections may complicate coordination efforts. To address these challenges, this study proposes a novel MARL method for distributed ATSC in arterials, termed the Distributed Controller for Heterogeneous Intersections (DCHI). The DCHI method introduces a Neighborhood Experience Sharing (NES) framework, wherein each agent utilizes both local data and shared experiences from adjacent intersections to improve its control policy. Within this framework, the neural networks of each agent are partitioned into two parts following the Knowledge Homogenizing Encapsulation (KHE) mechanism. The first part manages heterogeneous intersection features and transforms the control experiences, while the second part optimizes homogeneous control logic. Experimental results demonstrate that the proposed DCHI improves average travel time by over 30% compared to traditional methods and yields performance similar to that of the centralized sharing method. Furthermore, vehicle trajectories reveal that DCHI can adaptively establish green wave bands in a distributed manner. Given its superior control performance, accommodation of heterogeneous intersections, and low reliance on information networks, DCHI could significantly advance the practical application of MARL-based ATSC methods.
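The two-part agent structure described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: all class names (HeterogeneousEncoder, SharedPolicy), dimensions, and the random linear maps are hypothetical stand-ins. The point is the interface: a per-intersection front end maps observations of differing sizes (intersections have different lane and detector counts) to one fixed-size latent space, so a back end with identical structure at every intersection can learn from experiences shared by neighbors.

```python
# Hypothetical sketch of the KHE-style split: a heterogeneous,
# intersection-specific encoder followed by a homogeneous, shareable
# policy part. Pure Python, no learning; weights are placeholders.
import random

random.seed(0)

class HeterogeneousEncoder:
    """Per-intersection part: maps an observation of arbitrary
    dimension to a fixed-size latent vector via a linear map."""
    def __init__(self, obs_dim, latent_dim=4):
        self.w = [[random.uniform(-1, 1) for _ in range(obs_dim)]
                  for _ in range(latent_dim)]

    def encode(self, obs):
        return [sum(wi * oi for wi, oi in zip(row, obs)) for row in self.w]

class SharedPolicy:
    """Homogeneous part: identical structure at every intersection,
    so experiences expressed in latent space are interchangeable."""
    def __init__(self, latent_dim=4, n_phases=2):
        self.w = [[0.1 * (i + j) for j in range(latent_dim)]
                  for i in range(n_phases)]

    def act(self, latent):
        scores = [sum(wi * li for wi, li in zip(row, latent)) for row in self.w]
        return max(range(len(scores)), key=scores.__getitem__)

# Two intersections with different observation sizes share one policy.
enc_a = HeterogeneousEncoder(obs_dim=6)   # e.g. 6 detector readings
enc_b = HeterogeneousEncoder(obs_dim=10)  # e.g. 10 detector readings
policy = SharedPolicy()

obs_a = [1.0] * 6
obs_b = [0.5] * 10

# Both experiences land in the same 4-d latent space, so the shared
# back end can consume either agent's (latent, action, reward) tuples.
phase_a = policy.act(enc_a.encode(obs_a))
phase_b = policy.act(enc_b.encode(obs_b))
print(phase_a, phase_b)
```

Under this split, "sharing experience" between neighbors means exchanging latent-space tuples rather than raw observations, which is what makes coordination across heterogeneous intersections tractable.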
Pages: 2760-2776
Page count: 17