Deep Reinforcement Learning for Intersection Signal Control Considering Pedestrian Behavior

Cited by: 13
Authors
Han, Guangjie [1 ,2 ]
Zheng, Qi [1 ]
Liao, Lyuchao [1 ]
Tang, Penghao [1 ]
Li, Zhengrong [1 ]
Zhu, Yintian [1 ]
Affiliations
[1] Fujian Univ Technol, Sch Transportat, Fuzhou 350118, Peoples R China
[2] Hohai Univ, Dept Informat & Commun Syst, Changzhou 213022, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
traffic signal timing; deep reinforcement learning; pedestrian behavior; TRAFFIC LIGHT CONTROL; PHASE;
DOI
10.3390/electronics11213519
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Using deep reinforcement learning to solve traffic signal control problems is a research hotspot in the intelligent transportation field. Researchers have recently proposed various deep-reinforcement-learning-based solutions to intelligent transportation problems. However, most signal control optimization takes maximizing traffic capacity as the goal, ignoring pedestrians at intersections. To address this issue, we propose a pedestrian-considered deep reinforcement learning traffic signal control method, which combines a reinforcement learning network with a traffic signal control strategy covering both traffic efficiency and safety. The waiting times of both pedestrians and vehicles passing through the intersection are considered, and the Discrete Traffic State Encoding (DTSE) method is applied and improved to define more comprehensive states and rewards. During neural network training, multiple simulation environments are run in parallel to improve training efficiency. Finally, extensive simulation experiments are conducted on a real intersection scenario using the simulation software Simulation of Urban Mobility (SUMO). The results show that, compared to Dueling DQN, our method reduced waiting time by 58.76% and the number of people waiting by 51.54%. The proposed method can therefore reduce both the number of people waiting and the waiting time at intersections.
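To make the state representation concrete: in the DTSE scheme the abstract refers to (originally proposed by Genders and Wright), each approach lane is divided into fixed-length cells, and the state is built from a cell-occupancy vector plus a vector of normalized vehicle speeds. The following is a minimal illustrative sketch under those assumptions; the function name, cell length, and speed limit are hypothetical and not taken from the paper's actual code.

```python
def dtse_encode(vehicle_positions, vehicle_speeds, lane_length=100.0,
                cell_length=5.0, speed_limit=13.9):
    """Encode one lane as (occupancy, speed) vectors, DTSE-style.

    vehicle_positions: distances (m) of vehicles from the lane start.
    vehicle_speeds: current speeds (m/s) of the same vehicles.
    Returns two equal-length lists: binary cell occupancy and
    speed normalized by the speed limit (clipped to 1.0).
    """
    n_cells = int(lane_length // cell_length)
    occupancy = [0.0] * n_cells
    speeds = [0.0] * n_cells
    for pos, spd in zip(vehicle_positions, vehicle_speeds):
        # Map the vehicle's position to a cell index, clamped to the lane.
        cell = min(int(pos // cell_length), n_cells - 1)
        occupancy[cell] = 1.0
        speeds[cell] = min(spd / speed_limit, 1.0)
    return occupancy, speeds
```

Stacking such vectors across all incoming lanes (and, in a pedestrian-aware variant, across crosswalk waiting areas) yields the state tensor fed to the Q-network.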
Pages: 16