Identify, Estimate and Bound the Uncertainty of Reinforcement Learning for Autonomous Driving

Cited by: 13
Authors
Zhou, Weitao [1 ]
Cao, Zhong [1 ]
Deng, Nanshan [1 ]
Jiang, Kun [1 ]
Yang, Diange [1 ]
Affiliations
[1] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Uncertainty; Training data; Reinforcement learning; Training; Fitting; Neural networks; Closed box; Autonomous driving; reinforcement learning; trajectory planning;
DOI
10.1109/TITS.2023.3266885
CLC Classification Number
TU [Building Science];
Subject Classification Code
0813;
Abstract
Deep reinforcement learning (DRL) has emerged as a promising approach for developing more intelligent autonomous vehicles (AVs). A typical DRL application on AVs is training a neural network-based driving policy. However, the black-box nature of neural networks can result in unpredictable decision failures, making such AVs unreliable. To this end, this work proposes a method to identify and protect against unreliable decisions of a DRL driving policy. The basic idea is to estimate and constrain the policy's performance uncertainty, which quantifies the potential performance drop due to insufficient training data or network fitting errors. By constraining this uncertainty, the DRL model's performance always remains above that of a baseline policy. The uncertainty caused by insufficient data is estimated with the bootstrap method, and the uncertainty caused by network fitting error is estimated with an ensemble network. Finally, a baseline policy is added as the performance lower bound to avoid potential decision failures. The overall framework is called uncertainty-bound reinforcement learning (UBRL). The proposed UBRL is evaluated on DRL policies trained with different amounts of data, using an unprotected left-turn driving case as an example. The results show that the UBRL method can identify potentially unreliable decisions of the DRL policy. UBRL is guaranteed to outperform the baseline policy even when the DRL policy is poorly trained and highly uncertain, and its performance improves with more training data. Such a method is valuable for applying DRL to real-road driving and provides a metric for evaluating a DRL policy.
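
For readers skimming the mechanism described in the abstract, the following is a minimal Python sketch of uncertainty-bound action selection under stated assumptions: a bootstrapped ensemble provides a spread of value estimates, and the DRL action is used only when a pessimistic lower bound on its value still exceeds the baseline policy's value. All names here (QEnsemble, ubrl_act, k_sigma, the toy baseline values) are illustrative assumptions, not the authors' implementation.

import numpy as np

class QEnsemble:
    """Bootstrapped ensemble of action-value estimators; the spread across
    members reflects uncertainty from limited training data and fitting error."""
    def __init__(self, members):
        self.members = members  # callables: state -> array of Q(s, a)

    def predict(self, state):
        qs = np.stack([m(state) for m in self.members])  # (n_members, n_actions)
        return qs.mean(axis=0), qs.std(axis=0)

def ubrl_act(state, ensemble, baseline_action, baseline_value, k_sigma=2.0):
    """Use the DRL action only when its uncertainty-penalised value (a lower
    confidence bound) still beats the baseline policy; otherwise fall back."""
    q_mean, q_std = ensemble.predict(state)
    a_drl = int(np.argmax(q_mean))
    lower_bound = q_mean[a_drl] - k_sigma * q_std[a_drl]  # pessimistic value estimate
    return a_drl if lower_bound >= baseline_value else baseline_action

# Toy usage: random linear "members" stand in for trained Q-networks.
rng = np.random.default_rng(0)
members = [lambda s, w=rng.normal(size=(4, 3)): s @ w for _ in range(5)]
state = rng.normal(size=4)
print(ubrl_act(state, QEnsemble(members), baseline_action=0, baseline_value=0.1))

The fallback rule is what makes the "never worse than the baseline" claim operational: whenever the uncertainty-penalised estimate cannot beat the baseline value, the baseline action is executed instead.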
Pages: 7932-7942
Page count: 11