Value-based deep reinforcement learning for adaptive isolated intersection signal control

Cited by: 74
Authors
Wan, Chia-Hao [1 ]
Hwang, Ming-Chorng [1 ]
Affiliations
[1] China Engineering Consultants, Inc., ITS Research Center, 28F, 185, Sec. 2, Sinhai Rd., Taipei, Taiwan
Keywords
traffic engineering computing; learning (artificial intelligence); neural nets; road traffic; iterative methods; dynamic programming; value-based deep reinforcement learning; adaptive isolated intersection signal control; road network efficiency improvement; advanced traffic signal control methods; intelligent transportation systems; smart city; modern city; artificial intelligence; machine learning-based framework; deep Q-learning neural network; model-free technique; optimal discrete-time action selection problems; variable green time; traffic fluctuations; dynamic discount factor; iterative Bellman equation; biased action-value function estimation; VISSIM software; traffic arrival rates; traffic arrival patterns
DOI
10.1049/iet-its.2018.5170
Chinese Library Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
By improving the efficiency of road networks through advanced traffic signal control methods, intelligent transportation systems help characterize a smart city. Recently, owing to significant progress in artificial intelligence, machine learning-based frameworks for adaptive traffic signal control have received considerable attention. In particular, the deep Q-learning neural network is a model-free technique that can be applied to optimal discrete-time action selection problems. However, setting a variable green time is a key mechanism for responding to traffic fluctuations, so time steps cannot be fixed intervals in the reinforcement learning framework. In this study, the authors propose a dynamic discount factor embedded in the iterative Bellman equation to prevent biased estimation of the action-value function caused by inconstant time-step intervals. Moreover, the action is added to the input layer of the neural network during training, and the output layer is the estimated action-value for that action. The trained neural network can then serve as the agent's policy by generating the action that yields the optimal estimated value within a finite set. Preliminary results show that the trained agent outperforms a fixed timing plan in all testing cases, reducing total system delay by 20%.
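The dynamic discount factor and the action-in-the-input-layer design can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: it assumes the one-step Bellman target takes the duration-dependent form r + gamma^dt * max_a' Q(s', a'), and the names QNetwork, bellman_target, and greedy_action are hypothetical.

import torch
import torch.nn as nn

# Minimal sketch (not the paper's code). It illustrates two ideas from the
# abstract: (1) the chosen action is fed into the input layer together with
# the state, so the network emits a single scalar Q(s, a); (2) a dynamic
# discount gamma ** dt replaces the fixed discount in the Bellman target,
# compensating for variable green-time step lengths. The gamma ** dt form
# is an assumption inferred from the abstract.

class QNetwork(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar action-value for the given action
        )

    def forward(self, state, action):
        # Concatenate state and action at the input layer.
        return self.net(torch.cat([state, action], dim=-1))

def bellman_target(q_net, reward, next_state, candidate_actions, dt, gamma=0.99):
    # One-step target r + gamma ** dt * max_a' Q(s', a'); the max is taken
    # over the finite candidate set (e.g. one-hot green-time choices).
    with torch.no_grad():
        q_next = torch.stack([q_net(next_state, a) for a in candidate_actions])
    return reward + (gamma ** dt) * q_next.max()

def greedy_action(q_net, state, candidate_actions):
    # Policy: return the candidate action with the largest estimated value.
    with torch.no_grad():
        values = torch.stack([q_net(state, a) for a in candidate_actions])
    return candidate_actions[int(values.argmax())]

With one-hot encodings of the candidate green-time choices as candidate_actions, greedy_action realizes the finite-set value-maximizing policy described above, and the exponent dt keeps rewards collected after a long green phase from being discounted as if only one fixed interval had elapsed.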
Pages: 1005-1010
Page count: 6
References
21 items in total
[1] Abdulhai B, Kattan L. Reinforcement learning: introduction to theory and potential for transport applications. Canadian Journal of Civil Engineering, 2003, 30(6): 981-991.
[2] Abdulhai B, Pringle R, Karakoulas GJ. Reinforcement learning for true adaptive traffic signal control. Journal of Transportation Engineering, 2003, 129(3): 278-285.
[3] Anonymous. P MACHINE LEARNING, 2000.
[4] Anonymous. Dynamic Programming, 2007.
[5] Anonymous. Playing Atari with deep reinforcement learning, 2013.
[6] Anonymous. 39 GREAT BRIT ROAD R, 1958.
[7] Anonymous. Reinforcement Learning: An Introduction, 2015.
[8] Bengio Y. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009, 2(1): 1-127.
[10] Glorot X. P 13 INT C ART INT S, 2010, p. 249.