MetaDrive: Composing Diverse Driving Scenarios for Generalizable Reinforcement Learning

Cited by: 76
Authors
Li, Quanyi [1 ]
Peng, Zhenghao [2 ]
Feng, Lan [4 ]
Zhang, Qihang [3 ]
Xue, Zhenghai [3 ]
Zhou, Bolei [5 ]
Affiliations
[1] Chinese Univ Hong Kong, Ctr Perceptual & Interact Intelligence, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[3] Chinese Univ Hong Kong, Dept Informat Engn, Hong Kong, Peoples R China
[4] Swiss Fed Inst Technol, CH-8092 Zurich, Switzerland
[5] Univ Calif Los Angeles, Los Angeles, CA 90095 USA
Keywords
Task analysis; Roads; Reinforcement learning; Benchmark testing; Training; Safety; Autonomous vehicles; autonomous driving; simulation
DOI
10.1109/TPAMI.2022.3190471
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Driving safely requires multiple capabilities from human and intelligent agents, such as generalizability to unseen environments, safety awareness of the surrounding traffic, and decision-making in complex multi-agent settings. Despite the great success of Reinforcement Learning (RL), most RL research investigates each capability separately due to the lack of integrated environments. In this work, we develop a new driving simulation platform called MetaDrive to support research on generalizable reinforcement learning algorithms for machine autonomy. MetaDrive is highly compositional: it can generate an infinite number of diverse driving scenarios through both procedural generation and the import of real-world data. Based on MetaDrive, we construct a variety of RL tasks and baselines in both single-agent and multi-agent settings, including benchmarking generalizability across unseen scenes, safe exploration, and learning multi-agent traffic. Generalization experiments conducted on both procedurally generated and real-world scenarios show that increasing the diversity and size of the training set improves the RL agent's generalizability. We further evaluate various safe reinforcement learning and multi-agent reinforcement learning algorithms in MetaDrive environments and provide the benchmarks. Source code, documentation, and a demo video are available at https://metadriverse.github.io/metadrive.
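The generalization benchmark described in the abstract trains agents on one set of procedurally generated scenarios and evaluates them on a held-out set. Below is a minimal sketch of how such disjoint training and test suites might be instantiated with MetaDrive's Python API; it assumes a recent MetaDrive release with the gymnasium-style MetaDriveEnv interface and the num_scenarios / start_seed config keys from the public documentation (older releases use environment_num and a four-value step return). The random policy is only a placeholder, not one of the paper's baselines.

# Hypothetical setup sketch, not code from the paper.
from metadrive import MetaDriveEnv

# Training suite: 100 procedurally generated scenarios, seeds 0-99.
train_env = MetaDriveEnv(dict(num_scenarios=100, start_seed=0))
# Held-out test suite: 50 scenarios with non-overlapping seeds 1000-1049.
test_env = MetaDriveEnv(dict(num_scenarios=50, start_seed=1000))

obs, info = train_env.reset()
for _ in range(1000):
    action = train_env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = train_env.step(action)
    if terminated or truncated:
        # Each reset samples a new scenario from the training suite.
        obs, info = train_env.reset()
train_env.close()
test_env.close()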
Pages: 3461-3475
Page count: 15