Adaptive traffic signal control using deep Q-learning: case study on optimal implementations

Cited by: 1
Authors
Pan, Guangyuan [1 ,2 ]
Muresan, Matthew [2 ]
Fu, Liping [2 ,3 ]
Affiliations
[1] Linyi Univ, Sch Automat & Elect Engn, Linyi 276000, Shandong, Peoples R China
[2] Univ Waterloo, Dept Civil & Environm Engn, Waterloo, ON N2L3G1, Canada
[3] Wuhan Univ Technol, Intelligent Transportat Syst Res Ctr, Wuhan 430063, Hubei, Peoples R China
Funding
National Natural Science Foundation of China; Natural Sciences and Engineering Research Council of Canada
Keywords
deep reinforcement learning; traffic signal control; optimal control;
DOI
10.1139/cjce-2022-0273
Chinese Library Classification (CLC)
TU [Building Science]
Subject Classification Code
0813
Abstract
Deep reinforcement learning has found great success in addressing many challenging control problems; however, real-world implementations are still scarce, if not non-existent. This is primarily due to three main challenges pertaining to implementation: stability, optimal settings, and a lack of knowledge about methods that can be applied in field settings. This research attempts to address these issues with an adaptive, simulation-based control framework proposed specifically for training and evaluation. The framework uses simulation models to conduct an extensive sensitivity analysis of the effects of key design variables, including reward schemes, state spaces, and model-training parameters. The feasibility of transfer learning as a training strategy is also studied on scenarios with different layouts and different driver-behavior models. Complex scenarios, including multiphase ring-and-barrier control and multi-intersection control, are also evaluated and used as test cases. The research contributes a significant body of evidence on several critical design- and implementation-related questions, such as input representation, data (technology) requirements, training methods, and model transferability.
Pages: 488-497 (10 pages)
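The record does not include the authors' code. As a minimal, illustrative sketch of the kind of agent the abstract describes, the following deep Q-learning loop controls a toy single intersection. The stub environment, the state (per-approach queue lengths), the action (which signal phase to serve), and the reward (negative total queue) are all assumptions chosen to mirror the abstract, not the paper's implementation.

```python
# Illustrative deep Q-learning sketch for adaptive signal control.
# NOT the authors' code: environment, state, action, and reward are assumed.
import random
from collections import deque

import torch
import torch.nn as nn

class StubIntersectionEnv:
    """Placeholder for a microsimulation of one intersection.
    State: queue length on each of 4 approaches. Action: green phase 0 or 1."""
    def __init__(self):
        self.queues = torch.zeros(4)

    def reset(self):
        self.queues = torch.randint(0, 10, (4,)).float()
        return self.queues.clone()

    def step(self, action):
        arrivals = torch.randint(0, 3, (4,)).float()       # random demand
        served = torch.zeros(4)
        served[2 * action: 2 * action + 2] = 4.0           # each phase serves 2 approaches
        self.queues = torch.clamp(self.queues + arrivals - served, min=0)
        reward = -self.queues.sum().item()                 # assumed reward: -total queue
        return self.queues.clone(), reward

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=10_000), 0.95, 0.1

env = StubIntersectionEnv()
state = env.reset()
for t in range(5_000):
    # epsilon-greedy action selection over Q-values
    if random.random() < eps:
        action = random.randrange(2)
    else:
        action = q_net(state).argmax().item()
    next_state, reward = env.step(action)
    buffer.append((state, action, reward, next_state))
    state = next_state
    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            y = r + gamma * target_net(s2).max(1).values   # Bellman target
        loss = nn.functional.mse_loss(q, y)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    if t % 500 == 0:
        target_net.load_state_dict(q_net.state_dict())     # sync target network
```

The replay buffer and periodically synced target network are the standard DQN stabilizers; the reward scheme, state space, and training parameters varied here arbitrarily are exactly the design variables the paper's sensitivity analysis examines.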