Sharing Control Knowledge Among Heterogeneous Intersections: A Distributed Arterial Traffic Signal Coordination Method Using Multi-Agent Reinforcement Learning

Cited by: 0
Authors
Zhu, Hong [1 ]
Feng, Jialong [1 ]
Sun, Fengmei [1 ]
Tang, Keshuang [1 ]
Zang, Di [2 ,3 ]
Kang, Qi [4 ,5 ]
Affiliations
[1] Tongji Univ, Coll Transportat Engn, Key Lab Rd & Traff Engn, Minist Educ, Shanghai 201804, Peoples R China
[2] Tongji Univ, Dept Comp Sci & Technol, Shanghai 200092, Peoples R China
[3] Tongji Univ, Key Lab Embedded Syst & Serv Comp, Minist Educ, Shanghai 200092, Peoples R China
[4] Tongji Univ, Dept Control Sci & Engn, Shanghai 201804, Peoples R China
[5] Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 200092, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Optimization; Adaptation models; Process control; Reinforcement learning; Training; Stability criteria; Roads; Real-time systems; Electronic mail; Delays; Arterial traffic signal control; multi-agent reinforcement learning; proximal policy optimization; experience sharing; REAL-TIME; MODEL; SYSTEM;
DOI
10.1109/TITS.2024.3521514
Chinese Library Classification (CLC): TU [Architectural Science];
Discipline code: 0813;
Abstract
Treating each intersection as a basic agent, multi-agent reinforcement learning (MARL) methods have emerged as the predominant approach to distributed adaptive traffic signal control (ATSC) in multi-intersection scenarios such as arterial coordination. MARL-based ATSC currently faces two challenges: disturbances from the control policies of other intersections may impair an agent's learning and control stability, and heterogeneous features across intersections may complicate coordination. To address these challenges, this study proposes a novel MARL method for distributed ATSC on arterials, termed the Distributed Controller for Heterogeneous Intersections (DCHI). DCHI introduces a Neighborhood Experience Sharing (NES) framework, in which each agent improves its control policy using both local data and experiences shared by adjacent intersections. Within this framework, each agent's neural networks are partitioned into two parts following the Knowledge Homogenizing Encapsulation (KHE) mechanism: the first part handles heterogeneous intersection features and transforms the control experiences, while the second part optimizes the homogeneous control logic. Experimental results demonstrate that DCHI improves average travel time by over 30% compared with traditional methods and performs comparably to the centralized sharing method. Furthermore, vehicle trajectories show that DCHI can adaptively establish green wave bands in a distributed manner. Given its superior control performance, accommodation of heterogeneous intersections, and low reliance on information networks, DCHI could significantly advance the practical application of MARL-based ATSC methods.
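The abstract's two core ideas can be illustrated with a minimal sketch: splitting each agent's network into an intersection-specific part and a homogeneous control part (KHE), and mixing a neighbor's experiences into each agent's PPO update (NES). This is not the authors' implementation; the PyTorch classes and names (KHEAgent, ppo_update, fake_batch), the dimensions, and the random stand-in data are all illustrative assumptions.

# A minimal sketch (not the paper's code) of the KHE split and NES sharing
# described in the abstract. All names, shapes, and data are assumptions.
import torch
import torch.nn as nn


class KHEAgent(nn.Module):
    """One intersection agent: local encoder + homogeneous PPO actor-critic head."""

    def __init__(self, obs_dim: int, latent_dim: int = 32, n_phases: int = 4):
        super().__init__()
        # Part 1 (heterogeneous): maps this intersection's raw observation,
        # whatever its lane/phase layout produces, into a fixed-size latent.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim), nn.ReLU())
        # Part 2 (homogeneous): control logic over the shared latent space;
        # experiences expressed in this space can be shared across intersections.
        self.actor = nn.Linear(latent_dim, n_phases)
        self.critic = nn.Linear(latent_dim, 1)

    def encode(self, obs: torch.Tensor) -> torch.Tensor:
        return self.encoder(obs)

    def act(self, obs: torch.Tensor):
        z = self.encode(obs)
        dist = torch.distributions.Categorical(logits=self.actor(z))
        a = dist.sample()
        return a, dist.log_prob(a), self.critic(z).squeeze(-1)


def ppo_update(agent, latents, actions, old_log_probs, advantages, returns,
               optimizer, clip_eps: float = 0.2):
    """Clipped-PPO step on latent experiences, so a batch may mix the agent's
    own transitions with latents shared by neighboring agents (NES).
    (In the paper's framework the heterogeneous part is trained locally;
    this sketch only updates the homogeneous head.)"""
    dist = torch.distributions.Categorical(logits=agent.actor(latents))
    ratio = torch.exp(dist.log_prob(actions) - old_log_probs)
    policy_loss = -torch.min(
        ratio * advantages,
        torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages).mean()
    value_loss = (agent.critic(latents).squeeze(-1) - returns).pow(2).mean()
    loss = policy_loss + 0.5 * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Two heterogeneous intersections: different observation sizes, shared latent space.
    a1, a2 = KHEAgent(obs_dim=12), KHEAgent(obs_dim=20)
    opt = torch.optim.Adam(a1.parameters(), lr=3e-4)

    def fake_batch(agent, obs_dim, n=64):
        """Random rollouts standing in for traffic-simulator transitions."""
        obs = torch.randn(n, obs_dim)
        with torch.no_grad():
            a, logp, v = agent.act(obs)
            z = agent.encode(obs)
        adv = torch.randn(n)
        ret = v + adv
        return z, a, logp, adv, ret

    own = fake_batch(a1, 12)
    shared = fake_batch(a2, 20)                       # neighbor's encoded experiences
    batch = [torch.cat(t) for t in zip(own, shared)]  # NES: mix own + neighbor latents
    print("loss:", ppo_update(a1, *batch, optimizer=opt))

Because the experiences are exchanged in the homogenized latent space, the mixed batch in the last lines can combine transitions from intersections with different observation sizes, which mirrors the role the abstract assigns to the first (heterogeneous) part of each agent's network.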
Pages: 2760-2776
Number of pages: 17
Related Papers
50 records in total
  • [1] Distributed Signal Control of Arterial Corridors Using Multi-Agent Deep Reinforcement Learning
    Zhang, Weibin
    Yan, Chen
    Li, Xiaofeng
    Fang, Liangliang
    Wu, Yao-Jan
    Li, Jun
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (01) : 178 - 190
  • [2] Multi-Agent Transfer Reinforcement Learning With Multi-View Encoder for Adaptive Traffic Signal Control
    Ge, Hongwei
    Gao, Dongwan
    Sun, Liang
    Hou, Yaqing
    Yu, Chao
    Wang, Yuxin
    Tan, Guozhen
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (08) : 12572 - 12587
  • [3] Multiple intersections traffic signal control based on cooperative multi-agent reinforcement learning
    Liu, Junxiu
    Qin, Sheng
    Su, Min
    Luo, Yuling
    Wang, Yanhu
    Yang, Su
    INFORMATION SCIENCES, 2023, 647
  • [4] Multi-agent Reinforcement Learning for Traffic Signal Control
    Prabuchandran, K. J.
    Kumar, Hemanth A. N.
    Bhatnagar, Shalabh
    2014 IEEE 17TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2014, : 2529 - 2534
  • [5] A Multi-Agent Reinforcement Learning Based Control Method for CAVs in a Mixed Platoon
    Xu, Yaqi
    Shi, Yan
    Tong, Xiaolu
    Chen, Shanzhi
    Ge, Yuming
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (11) : 16160 - 16172
  • [6] An Improved Traffic Signal Control Method Based on Multi-agent Reinforcement Learning
    Xu, Jianyou
    Zhang, Zhichao
    Zhang, Shuo
    Miao, Jiayao
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 6612 - 6616
  • [7] A Distributed Assignment Method for Dynamic Traffic Assignment Using Heterogeneous-Adviser Based Multi-Agent Reinforcement Learning
    Pan, Zhaotian
    Qu, Zhaowei
    Chen, Yongheng
    Li, Haitao
    Wang, Xin
    IEEE ACCESS, 2020, 8 : 154237 - 154255
  • [8] Urban Traffic Control Using Distributed Multi-agent Deep Reinforcement Learning
    Kitagawa, Shunya
    Moustafa, Ahmed
    Ito, Takayuki
    PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT III, 2019, 11672 : 337 - 349
  • [9] A multi-agent reinforcement learning based approach for intelligent traffic signal control
    Benhamza, Karima
    Seridi, Hamid
    Agguini, Meriem
    Bentagine, Amel
    EVOLVING SYSTEMS, 2024, 15 (06) : 2383 - 2397
  • [10] Sharing of Energy Among Cooperative Households Using Distributed Multi-Agent Reinforcement Learning
    Ebell, Niklas
    Guetlein, Moritz
    Pruckner, Marco
    PROCEEDINGS OF 2019 IEEE PES INNOVATIVE SMART GRID TECHNOLOGIES EUROPE (ISGT-EUROPE), 2019,