Reinforcement Learning With Function Approximation for Traffic Signal Control

Cited by: 233
Authors
Prashanth, L. A. [1 ]
Bhatnagar, Shalabh [1 ]
Affiliations
[1] Indian Inst Sci, Dept Comp Sci & Automat, Bangalore 560012, Karnataka, India
Keywords
Q-learning with full-state representation (QTLC-FS); Q-learning with function approximation (QTLC-FA); reinforcement learning (RL); traffic signal control; REAL-TIME; NETWORKS; DESIGN;
DOI
10.1109/TITS.2010.2091408
CLC Classification: TU [Architectural Science]
Discipline Code: 0813
Abstract
We propose, for the first time, a reinforcement learning (RL) algorithm with function approximation for traffic signal control. Our algorithm incorporates state-action features and is easily implementable in high-dimensional settings. Prior work on applying RL to traffic signal control, e.g., that of Abdulhai et al., requires full-state representations and cannot be implemented even in moderate-sized road networks, because its computational complexity grows exponentially in the number of lanes and junctions. We tackle this curse of dimensionality by using feature-based state representations that broadly characterize the level of congestion as low, medium, or high. One advantage of our algorithm is that, unlike prior RL-based work, it does not require precise information on queue lengths and elapsed times at each lane but instead works with these coarse features. The number of features our algorithm requires is linear in the number of signaled lanes, leading to a reduction in computational complexity of several orders of magnitude. We implement our algorithm in various settings and compare its performance with other algorithms in the literature, including those of Abdulhai et al. and Cools et al., as well as the fixed-timing and longest-queue algorithms. For comparison, we also develop an RL algorithm that uses a full-state representation and, unlike the work of Abdulhai et al., incorporates prioritization of traffic. We observe that our algorithm outperforms all the other algorithms on all the road network settings we consider.
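The abstract describes Q-learning with linear function approximation over coarse per-lane congestion features (low/medium/high), with a feature count linear in the number of signaled lanes. The following is a minimal illustrative sketch of that general technique, not the authors' implementation: the lane count, action set, reward, and the toy random transitions are all assumptions made for demonstration.

```python
import numpy as np

# Illustrative sketch: Q-learning with linear function approximation over
# coarse congestion-level features. All constants below are hypothetical.
N_LANES = 4      # number of signaled lanes (assumed)
N_LEVELS = 3     # congestion levels: 0 = low, 1 = medium, 2 = high
N_ACTIONS = 2    # e.g., two signal phases (assumed)

def features(levels, action):
    """One-hot congestion features per lane, replicated per action.
    Feature count is linear in the number of lanes, as the paper argues."""
    phi = np.zeros(N_ACTIONS * N_LANES * N_LEVELS)
    base = action * N_LANES * N_LEVELS
    for lane, lvl in enumerate(levels):
        phi[base + lane * N_LEVELS + lvl] = 1.0
    return phi

def q_value(theta, levels, action):
    # Linear approximation: Q(s, a) = theta . phi(s, a)
    return theta @ features(levels, action)

def td_update(theta, s, a, r, s_next, alpha=0.05, gamma=0.9):
    """One Q-learning step: theta <- theta + alpha * TD-error * phi(s, a)."""
    q_next = max(q_value(theta, s_next, b) for b in range(N_ACTIONS))
    delta = r + gamma * q_next - q_value(theta, s, a)
    return theta + alpha * delta * features(s, a)

# Toy run with random transitions; reward penalizes total congestion.
rng = np.random.default_rng(0)
theta = np.zeros(N_ACTIONS * N_LANES * N_LEVELS)
s = [int(rng.integers(N_LEVELS)) for _ in range(N_LANES)]
for _ in range(1000):
    a = int(rng.integers(N_ACTIONS))
    s_next = [int(rng.integers(N_LEVELS)) for _ in range(N_LANES)]
    r = -sum(s_next)  # hypothetical reward: negative total congestion
    theta = td_update(theta, s, a, r, s_next)
    s = s_next
```

The weight vector has `N_ACTIONS * N_LANES * N_LEVELS` entries, so it grows linearly with the number of lanes rather than exponentially as a full tabular state representation would.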
Pages: 412-421 (10 pages)
Related References (29 total)
[1] Abdulhai B, Pringle R, Karakoulas GJ. Reinforcement learning for true adaptive traffic signal control. Journal of Transportation Engineering, 2003, 129(3): 278-285.
[2] Abu-Lebdeh G, Benekohal RF. Design and evaluation of dynamic traffic management strategies for congested conditions. Transportation Research Part A: Policy and Practice, 2003, 37(2): 109-127.
[3] Bertsekas DP. Optimization and Neural Computation Series, Vol. 3, 1996.
[4] Bhatnagar S, Sutton RS, Ghavamzadeh M, Lee M. Natural actor-critic algorithms. Automatica, 2009, 45(11): 2471-2482.
[5] Chin DC. Proceedings of the 1999 American Control Conference, 1999: 2188. DOI 10.1109/ACC.1999.786341.
[6] Cools SB. Advanced Information and Knowledge Processing, 2008: 41. DOI 10.1007/978-1-84628-982-8_3.
[7] Girianna M, Benekohal RF. Using genetic algorithms to design signal coordination for oversaturated networks. ITS Journal, 2004, 8(2): 117-129.
[8] Gokulan BP, Srinivasan D. Distributed geometric fuzzy multiagent urban traffic signal control. IEEE Transactions on Intelligent Transportation Systems, 2010, 11(3): 714-727.
[9] Konda VR, Tsitsiklis JN. On actor-critic algorithms. SIAM Journal on Control and Optimization, 2003, 42(4): 1143-1166.
[10] Kwong K, Kavaler R, Rajagopal R, Varaiya P. Real-time measurement of link vehicle count and travel time in a road network. IEEE Transactions on Intelligent Transportation Systems, 2010, 11(4): 814-825.