Sharing Control Knowledge Among Heterogeneous Intersections: A Distributed Arterial Traffic Signal Coordination Method Using Multi-Agent Reinforcement Learning

Cited by: 0
Authors
Zhu, Hong [1 ]
Feng, Jialong [1 ]
Sun, Fengmei [1 ]
Tang, Keshuang [1 ]
Zang, Di [2 ,3 ]
Kang, Qi [4 ,5 ]
Affiliations
[1] Tongji Univ, Coll Transportat Engn, Key Lab Rd & Traff Engn, Minist Educ, Shanghai 201804, Peoples R China
[2] Tongji Univ, Dept Comp Sci & Technol, Shanghai 200092, Peoples R China
[3] Tongji Univ, Serv Comp, Key Lab Embedded Syst, Minist Educ, Shanghai 200092, Peoples R China
[4] Tongji Univ, Dept Control Sci & Engn, Shanghai 201804, Peoples R China
[5] Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 200092, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Optimization; Adaptation models; Process control; Reinforcement learning; Training; Stability criteria; Roads; Real-time systems; Electronic mail; Delays; Arterial traffic signal control; multi-agent reinforcement learning; proximal policy optimization; experience sharing; REAL-TIME; MODEL; SYSTEM;
DOI
10.1109/TITS.2024.3521514
Chinese Library Classification (CLC)
TU [Architectural Science];
Discipline Code
0813;
Abstract
Treating each intersection as a basic agent, multi-agent reinforcement learning (MARL) methods have emerged as the predominant approach for distributed adaptive traffic signal control (ATSC) in multi-intersection scenarios such as arterial coordination. MARL-based ATSC currently faces two challenges: disturbances from the control policies of other intersections may impair the learning and control stability of the agents, and the heterogeneous features across intersections may complicate coordination. To address these challenges, this study proposes a novel MARL method for distributed ATSC on arterials, termed the Distributed Controller for Heterogeneous Intersections (DCHI). DCHI introduces a Neighborhood Experience Sharing (NES) framework, in which each agent uses both local data and experiences shared by adjacent intersections to improve its control policy. Within this framework, each agent's neural networks are partitioned into two parts following the Knowledge Homogenizing Encapsulation (KHE) mechanism: the first part handles heterogeneous intersection features and transforms the control experiences, while the second part optimizes the homogeneous control logic. Experimental results demonstrate that the proposed DCHI improves average travel time by over 30% compared with traditional methods and performs similarly to the centralized sharing method. Furthermore, vehicle trajectories show that DCHI can adaptively establish green wave bands in a distributed manner. Given its superior control performance, accommodation of heterogeneous intersections, and low reliance on information networks, DCHI could significantly advance the practical application of MARL-based ATSC methods.
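A minimal, hypothetical sketch of the two-part agent architecture described in the abstract, assuming a PyTorch actor-critic setup: an intersection-specific encoder (in the spirit of KHE) maps heterogeneous observations to a fixed-size homogeneous latent, and a shared-logic policy head selects signal phases, so that experiences from neighbouring intersections can be compared in the same latent space. All class names, dimensions, and the PPO-style actor-critic heads are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class HeterogeneousEncoder(nn.Module):
    # Part 1: maps an intersection-specific observation (arbitrary size)
    # to a fixed-size homogeneous latent representation.
    def __init__(self, obs_dim: int, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim), nn.ReLU(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class HomogeneousPolicy(nn.Module):
    # Part 2: actor-critic head over the homogeneous latent; this is the
    # portion whose control logic could be optimized with shared experiences.
    def __init__(self, latent_dim: int = 64, n_phases: int = 4):
        super().__init__()
        self.actor = nn.Linear(latent_dim, n_phases)   # phase-selection logits
        self.critic = nn.Linear(latent_dim, 1)         # state value for PPO

    def forward(self, z: torch.Tensor):
        return self.actor(z), self.critic(z)


class IntersectionAgent(nn.Module):
    # One agent per intersection: its own encoder plus a policy head that can
    # also be trained on latents encoded from neighbours' shared experiences.
    def __init__(self, obs_dim: int, n_phases: int, latent_dim: int = 64):
        super().__init__()
        self.encoder = HeterogeneousEncoder(obs_dim, latent_dim)
        self.policy = HomogeneousPolicy(latent_dim, n_phases)

    def act(self, obs: torch.Tensor):
        logits, value = self.policy(self.encoder(obs))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        return action, dist.log_prob(action), value


if __name__ == "__main__":
    # Two heterogeneous intersections: different observation sizes and phase counts.
    agent_a = IntersectionAgent(obs_dim=12, n_phases=4)
    agent_b = IntersectionAgent(obs_dim=20, n_phases=3)
    obs_a, obs_b = torch.randn(1, 12), torch.randn(1, 20)
    print(agent_a.act(obs_a)[0].item(), agent_b.act(obs_b)[0].item())

In this sketch, sharing an experience would mean passing a neighbour's observation through the recipient's own encoder before reusing it to update the homogeneous policy head; how the paper actually transforms and weights shared experiences is not specified in the abstract.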
Pages: 2760-2776
Page count: 17
Related Papers
50 records in total
  • [31] Transfer Learning Method Using Ontology for Heterogeneous Multi-agent Reinforcement Learning
    Kono, Hitoshi
    Kamimura, Akiya
    Tomita, Kohji
    Murata, Yuta
    Suzuki, Tsuyoshi
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2014, 5 (10) : 156 - 164
  • [32] Micro Junction Agent: A Scalable Multi-agent Reinforcement Learning Method for Traffic Control
    Choi, BumKyu
    Choe, Jean Seong Bjorn
    Kim, Jong-kook
    ICAART: PROCEEDINGS OF THE 14TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 3, 2022, : 509 - 515
  • [33] Network-wide traffic signal control optimization using a multi-agent deep reinforcement learning
    Li, Zhenning
    Yu, Hao
    Zhang, Guohui
    Dong, Shangjia
    Xu, Cheng-Zhong
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2021, 125
  • [34] IHG-MA: Inductive heterogeneous graph multi-agent reinforcement learning for multi-intersection traffic signal control
    Yang, Shantian
    Yang, Bo
    Kang, Zhongfeng
    Deng, Lihui
    NEURAL NETWORKS, 2021, 139 : 265 - 277
  • [35] Swarm Reinforcement Learning for traffic signal control based on cooperative multi-agent framework
    Tahifa, Mohammed
    Boumhidi, Jaouad
    Yahyaouy, Ali
    2015 INTELLIGENT SYSTEMS AND COMPUTER VISION (ISCV), 2015,
  • [36] Extensible Hierarchical Multi-Agent Reinforcement-Learning Algorithm in Traffic Signal Control
    Zhao, Pengqian
    Yuan, Yuyu
    Guo, Ting
    APPLIED SCIENCES-BASEL, 2022, 12 (24):
  • [37] A Meta Multi-agent Reinforcement Learning Algorithm for Multi-intersection Traffic Signal Control
    Yang, Shantian
    Yang, Bo
    2021 IEEE INTL CONF ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, INTL CONF ON PERVASIVE INTELLIGENCE AND COMPUTING, INTL CONF ON CLOUD AND BIG DATA COMPUTING, INTL CONF ON CYBER SCIENCE AND TECHNOLOGY CONGRESS DASC/PICOM/CBDCOM/CYBERSCITECH 2021, 2021, : 18 - 25
  • [38] Dynamic Arterial Coordinated Control Based on Multi-agent Reinforcement Learning
    Fang, Liangliang
    Zhang, Weibin
    PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 2716 - 2721
  • [39] Cooperative Multi-Agent Reinforcement Learning Framework for Edge Intelligence-Empowered Traffic Light Control
    Shi, Haiyong
    Liu, Bingyi
    Wang, Enshu
    Han, Weizhen
    Wang, Jinfan
    Cui, Shihong
    Wu, Libing
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (04) : 7373 - 7384
  • [40] Cooperative Traffic Signal Control Using a Distributed Agent-Based Deep Reinforcement Learning With Incentive Communication
    Zhou, Bin
    Zhou, Qishen
    Hu, Simon
    Ma, Dongfang
    Jin, Sheng
    Lee, Der-Horng
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (08) : 10147 - 10160