Learning scalable multi-agent coordination by spatial differentiation for traffic control

Times cited: 18
Authors
Liu, Junjia [1 ]
Zhang, Huimin [2 ]
Fu, Zhuang [1 ]
Wang, Yao [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, State Key Lab Mech Syst & Vibrat, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Sch Mech Engn, Shanghai 200240, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-agent; Coordination mechanism; γ-Reward; Deep Reinforcement Learning; Spatial differentiation;
DOI
10.1016/j.engappai.2021.104165
Chinese Library Classification
TP [automation technology; computer technology];
Discipline code
0812;
Abstract
The intelligent control of traffic signals is critical to the optimization of transportation systems. To achieve globally optimal traffic efficiency in large-scale road networks, recent works have focused on coordination among intersections and have shown promising results. However, existing studies pay more attention to sharing observations among intersections (both explicitly and implicitly) and neglect the consequences of decisions. In this paper, we design a multi-agent coordination framework based on Deep Reinforcement Learning for traffic signal control, defined as γ-Reward, which includes both the original γ-Reward and γ-Attention-Reward. Specifically, we propose the Spatial Differentiation method for coordination, which uses the temporal-spatial information in the replay buffer to amend the reward of each action. A concise theoretical analysis proving that the proposed model converges to a Nash equilibrium is given. By extending the idea of the Markov Chain to the space-time dimension, this truly decentralized coordination mechanism replaces the graph attention method and realizes the decoupling of the road network, which is more scalable and more in line with practice. Simulation results show that the proposed model retains state-of-the-art performance even without a centralized setting. Code is available at https://github.com/Skylark0924/Gamma_Reward.
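The spatial-differentiation idea described in the abstract can be illustrated with a minimal sketch: each intersection's stored reward is amended with the discounted rewards that its spatial neighbors receive at later time steps in the replay buffer. This is only an illustrative reading of the abstract, not the paper's exact formulation; the class name, the averaging over neighbors, and the fixed look-ahead `horizon` are assumptions introduced here.

```python
from collections import deque


class GammaRewardBuffer:
    """Hypothetical sketch of a replay buffer implementing the
    spatial-differentiation idea: an agent's reward at step t is
    amended with discounted later rewards of its spatial neighbors.
    The exact amendment rule in the paper may differ."""

    def __init__(self, neighbors, gamma=0.9, horizon=3):
        self.neighbors = neighbors  # agent id -> list of neighboring agent ids
        self.gamma = gamma          # discount factor (the "γ" in γ-Reward)
        self.horizon = horizon      # how many future steps to look ahead
        self.trajectory = deque()   # one {agent: reward} dict per time step

    def push(self, rewards):
        """Store the per-agent rewards observed at the current step."""
        self.trajectory.append(rewards)

    def amended_reward(self, agent, t):
        """Original reward at step t plus discounted mean neighbor
        rewards from the following `horizon` steps."""
        r = self.trajectory[t][agent]
        for k in range(1, self.horizon + 1):
            if t + k >= len(self.trajectory):
                break
            step = self.trajectory[t + k]
            nbr = [step[n] for n in self.neighbors[agent] if n in step]
            if nbr:
                r += (self.gamma ** k) * sum(nbr) / len(nbr)
        return r
```

Because each agent only reads the stored rewards of its immediate neighbors, no central coordinator is needed, which matches the decentralized, decoupled design the abstract emphasizes.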
Pages: 1-12
Page count: 12