Traffic signal optimization control method based on adaptive weighted averaged double deep Q network

Cited by: 5
Authors
Chen, Youqing [1 ]
Zhang, Huizhen [1 ]
Liu, Minglei [1 ]
Ye, Ming [1 ]
Xie, Hui [1 ]
Pan, Yubiao [1 ]
Affiliations
[1] Huaqiao Univ, Coll Comp Sci & Technol, Xiamen 361024, Fujian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Reinforcement learning; Deep learning; Double deep Q network; Intelligent transportation; Traffic signal control;
DOI
10.1007/s10489-023-04469-9
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
As critical nodes and major bottlenecks of urban traffic networks, signalized road intersections have an essential impact on traffic flow and congestion. Deep reinforcement learning algorithms have shown excellent performance in optimizing traffic signal timing, but the diversity of real-world control scenarios and real-time control requirements place higher demands on the adaptiveness of such algorithms. This paper proposes a traffic signal control method based on an Adaptive Weighted Averaged Double Deep Q Network (AWA-DDQN). First, a weighting formula combines the two estimators of the double Q network when updating the model. Then, the target value is computed as the mean of the action evaluations produced by historical network parameters. On this basis, a fully connected layer generates the weighting hyperparameters from a number of adjacent action-evaluation values, and the number of action values used in the mean calculation is gradually reduced to enhance the stability of training. Finally, simulation experiments were conducted with the traffic simulation software Vissim. The results show that, compared with existing methods, the AWA-DDQN-based signal control method effectively reduces the average vehicle delay, average queue length, and average number of stops, significantly improving traffic flow efficiency at intersections.
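The abstract does not reproduce the paper's exact update formulas, but the general idea it describes (double-Q action selection, a target averaged over historical parameter snapshots, and an adaptive weight blending the two estimators) can be illustrated in a minimal NumPy sketch. All names below (`q_online`, `q_history`, `beta`) are hypothetical illustrations, not the authors' notation:

```python
import numpy as np

def weighted_averaged_double_q_target(q_online, q_history, reward, gamma, beta):
    """Sketch of a weighted averaged double-Q target for one transition.

    q_online  : (n_actions,) action values of the online network at s'
    q_history : (K, n_actions) action values at s' from the last K
                parameter snapshots (the "averaged" part)
    beta      : weight in [0, 1] balancing the two estimators
    """
    # Double-Q decoupling: the online network selects the greedy action...
    a_star = int(np.argmax(q_online))
    # ...while the mean over historical snapshots evaluates it.
    q_avg = q_history.mean(axis=0)
    # The adaptive weight blends the online and averaged estimates.
    blended = beta * q_online[a_star] + (1.0 - beta) * q_avg[a_star]
    return reward + gamma * blended

# Example: two actions, two historical snapshots.
target = weighted_averaged_double_q_target(
    q_online=np.array([1.0, 2.0]),
    q_history=np.array([[0.5, 1.5], [1.5, 2.5]]),
    reward=1.0, gamma=0.9, beta=0.5)
```

In the paper, `beta` would be produced adaptively (via the fully connected layer mentioned above) rather than fixed, and the number of snapshots `K` shrinks over training.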
Pages: 18333-18354
Page count: 22