Multiple intersections traffic signal control based on cooperative multi-agent reinforcement learning

Cited by: 14
Authors
Liu, Junxiu [1 ]
Qin, Sheng [1 ]
Su, Min [1 ]
Luo, Yuling [1 ]
Wang, Yanhu [1 ]
Yang, Su [2 ]
Affiliations
[1] Guangxi Normal Univ, Sch Elect & Informat Engn, Guangxi Key Lab Brain Inspired Comp & Intelligent, Guilin, Peoples R China
[2] Swansea Univ, Dept Comp Sci, Swansea, Wales
Funding
National Natural Science Foundation of China;
Keywords
Traffic signal control; Reinforcement learning; Multi-agent system; ALGORITHM; LIGHTS;
DOI
10.1016/j.ins.2023.119484
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In multi-agent traffic signal control, the traffic signal at each intersection is controlled by an independent agent. Because each agent's control policy is dynamic, at large traffic scales the adjustment of one agent's policy has non-stationary effects on the surrounding intersections, destabilizing the overall system. This non-stationarity therefore needs to be eliminated to stabilize the multi-agent system. A cooperative multi-agent reinforcement learning method is proposed in this work to overcome the instability problem through a collaborative mechanism. Decentralized learning with limited communication is used to reduce the communication latency between agents. A Shapley value reward function is applied to comprehensively calculate the contribution of each agent and to avoid the influence of reward-coefficient variation, thereby reducing unstable factors. The Kullback-Leibler divergence is then used to distinguish the current policy from historical policies, and the loss function is optimized to eliminate the environmental non-stationarity. Experimental results demonstrate that the Shapley value reward function and the optimized loss function reduce the average travel time and its standard deviation, respectively, and this work provides an alternative for traffic signal control at multiple intersections.
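The abstract names two concrete mechanisms: a Shapley value reward that credits each agent with its marginal contribution to the joint outcome, and a Kullback-Leibler penalty that keeps an agent's current policy close to its historical policy. The Python sketch below illustrates both ideas under stated assumptions only; the coalition-value model (negative queue lengths plus a small synergy bonus), the helper names, and the beta coefficient are hypothetical and are not taken from the paper.

import itertools
import math
import numpy as np

def shapley_rewards(agents, coalition_value):
    """Exact Shapley value of each agent's contribution to the joint reward.

    `coalition_value` maps a frozenset of agent ids to the reward that
    coalition would obtain (hypothetical model, e.g. negative total queue
    length at the intersections the coalition controls).
    """
    n = len(agents)
    phi = {a: 0.0 for a in agents}
    for a in agents:
        others = [b for b in agents if b != a]
        for r in range(len(others) + 1):
            for subset in itertools.combinations(others, r):
                s = frozenset(subset)
                # Standard Shapley weight |S|! (n-|S|-1)! / n!
                weight = (math.factorial(len(s)) * math.factorial(n - len(s) - 1)
                          / math.factorial(n))
                phi[a] += weight * (coalition_value(s | {a}) - coalition_value(s))
    return phi

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete action distributions."""
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def regularised_policy_loss(advantage, log_prob, current_dist, historical_dist, beta=0.1):
    """Policy-gradient loss with a KL penalty toward the agent's historical policy.

    `beta` is an illustrative coefficient, not a value reported in the paper.
    """
    pg_loss = -advantage * log_prob
    return pg_loss + beta * kl_divergence(current_dist, historical_dist)

# Toy usage: two intersections, coalition value = negative total queue length,
# with a small bonus when both are controlled jointly.
queues = {"A": 4.0, "B": 6.0}
synergy = {frozenset({"A", "B"}): 2.0}
value = lambda s: -sum(queues[a] for a in s) + synergy.get(frozenset(s), 0.0)
print(shapley_rewards(["A", "B"], value))          # per-agent reward shares
print(regularised_policy_loss(advantage=1.5, log_prob=np.log(0.4),
                              current_dist=[0.4, 0.6], historical_dist=[0.5, 0.5]))

Note that the exact Shapley computation enumerates every coalition and scales exponentially with the number of agents, so in practice it is typically restricted to small neighbourhoods of intersections or approximated by sampling.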
Pages: 12