Using Reinforcement Learning to Minimize the Probability of Delay Occurrence in Transportation

Cited by: 47
Authors
Cao, Zhiguang [1 ]
Guo, Hongliang [2 ]
Song, Wen [3 ]
Gao, Kaizhou [4 ]
Chen, Zhenghua [5 ]
Zhang, Le [5 ]
Zhang, Xuexi [6 ]
Affiliations
[1] Natl Univ Singapore, Dept Ind Syst Engn & Management, Singapore 119077, Singapore
[2] Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 610051, Peoples R China
[3] Shandong Univ, Inst Marine Sci & Technol, Jinan 250300, Peoples R China
[4] Macau Univ Sci & Technol, Macau Inst Syst Engn, Taipa, Macao, Peoples R China
[5] Inst Infocomm Res I2R, Singapore 138632, Singapore
[6] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
Funding
National Research Foundation, Singapore; National Natural Science Foundation of China;
Keywords
Reinforcement learning; transportation; arriving on time; vehicle routing; Q-learning; SHORTEST-PATH PROBLEM; TRAFFIC ASSIGNMENT; GAME; FRAMEWORK; COST;
DOI
10.1109/TVT.2020.2964784
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Reducing traffic delay is crucial for the development of sustainable transportation systems, yet it remains a challenging task in studies of the stochastic shortest path (SSP) problem. Existing methods based on the probability tail model for the SSP problem seek the path that minimizes the probability of delay occurrence, which is equivalent to maximizing the probability of reaching the destination before a deadline (i.e., arriving on time). However, they suffer from low accuracy or high computational cost. Therefore, we design a novel and practical Q-learning approach in which the converged Q-values carry a practical meaning: they are the actual probabilities of arriving on time, which improves the accuracy of finding the truly optimal path. By further adopting dynamic neural networks to learn the value function, our approach scales well to large road networks with arbitrary deadlines. Moreover, our approach can also be implemented in a time-dependent manner, which further improves the quality of the returned path. Experimental results on road networks with real mobility data, including Beijing, Munich, and Singapore, demonstrate the significant advantages of the proposed approach over other methods.
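To make the arriving-on-time formulation concrete, below is a minimal illustrative sketch, not the authors' implementation: tabular Q-learning on a hypothetical toy road graph, where a state pairs a node with the remaining time budget and the converged Q-value of choosing a road segment is read as the probability of reaching the destination before the deadline. The graph, travel-time distributions, names (GRAPH, DEST, DEADLINE, sample_travel_time), and hyperparameters are all invented for illustration.

```python
import random
from collections import defaultdict

# Hypothetical toy road graph: node -> {next node: [(travel time, probability), ...]}
GRAPH = {
    "A": {"B": [(2, 0.8), (5, 0.2)], "C": [(3, 1.0)]},
    "B": {"D": [(2, 0.7), (6, 0.3)]},
    "C": {"D": [(4, 0.9), (8, 0.1)]},
    "D": {},  # destination has no outgoing edges
}
DEST, DEADLINE = "D", 8

def sample_travel_time(dist):
    # Draw one travel time from a discrete distribution [(time, prob), ...].
    r, acc = random.random(), 0.0
    for t, p in dist:
        acc += p
        if r <= acc:
            return t
    return dist[-1][0]

# Q[(node, remaining budget, next node)] is interpreted as P(arrive on time).
Q = defaultdict(float)
ALPHA, EPS, EPISODES = 0.05, 0.1, 50000

for _ in range(EPISODES):
    node, budget = "A", DEADLINE
    while node != DEST and budget > 0 and GRAPH[node]:
        # Epsilon-greedy choice of the next road segment.
        if random.random() < EPS:
            nxt = random.choice(list(GRAPH[node]))
        else:
            nxt = max(GRAPH[node], key=lambda n: Q[(node, budget, n)])
        t = sample_travel_time(GRAPH[node][nxt])
        new_budget = budget - t
        # Bellman target: 1 if the destination is reached within the deadline,
        # 0 if the budget is exhausted, otherwise the best continuation value.
        if nxt == DEST:
            target = 1.0 if new_budget >= 0 else 0.0
        elif new_budget <= 0 or not GRAPH[nxt]:
            target = 0.0
        else:
            target = max(Q[(nxt, new_budget, m)] for m in GRAPH[nxt])
        Q[(node, budget, nxt)] += ALPHA * (target - Q[(node, budget, nxt)])
        node, budget = nxt, new_budget

# Converged Q-values at the origin approximate on-time arrival probabilities
# (here roughly 0.94 via B and 0.90 via C).
print({n: round(Q[("A", DEADLINE, n)], 3) for n in GRAPH["A"]})
```

In this toy setting the integer time budget keeps the tabular state space small; as the abstract notes, the paper instead learns the value function with dynamic neural networks to scale to large road networks with arbitrary deadlines.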
Pages: 2424-2436
Page count: 13