MetaDrive: Composing Diverse Driving Scenarios for Generalizable Reinforcement Learning

Cited by: 48
Authors
Li, Quanyi [1 ]
Peng, Zhenghao [2 ]
Feng, Lan [4 ]
Zhang, Qihang [3 ]
Xue, Zhenghai [3 ]
Zhou, Bolei [5 ]
Affiliations
[1] Chinese Univ Hong Kong, Ctr Perceptual & Interact Intelligence, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[3] Chinese Univ Hong Kong, Dept Informat Engn, Hong Kong, Peoples R China
[4] Swiss Fed Inst Technol, CH-8092 Zurich, Switzerland
[5] Univ Calif Los Angeles, Los Angeles, CA 90095 USA
Keywords
Task analysis; Roads; Reinforcement learning; Benchmark testing; Training; Safety; Autonomous vehicles; autonomous driving; simulation
DOI
10.1109/TPAMI.2022.3190471
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Driving safely requires multiple capabilities from human and intelligent agents, such as generalizability to unseen environments, safety awareness of the surrounding traffic, and decision-making in complex multi-agent settings. Despite the great success of Reinforcement Learning (RL), most RL research investigates each capability separately due to the lack of integrated environments. In this work, we develop a new driving simulation platform called MetaDrive to support research on generalizable reinforcement learning algorithms for machine autonomy. MetaDrive is highly compositional and can generate an infinite number of diverse driving scenarios from both procedural generation and real-data import. Based on MetaDrive, we construct a variety of RL tasks and baselines in both single-agent and multi-agent settings, including benchmarking generalizability across unseen scenes, safe exploration, and learning multi-agent traffic. The generalization experiments conducted on both procedurally generated scenarios and real-world scenarios show that increasing the diversity and size of the training set improves the RL agent's generalizability. We further evaluate various safe reinforcement learning and multi-agent reinforcement learning algorithms in MetaDrive environments and provide the benchmarks. Source code, documentation, and demo video are available at https://metadriverse.github.io/metadrive.
Pages: 3461-3475
Number of pages: 15
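
The abstract describes MetaDrive's compositional scenario generation exposed through a standard RL environment interface. The snippet below is a minimal usage sketch rather than the authors' reference setup: it assumes the pip-installable metadrive-simulator package, and the config keys (num_scenarios, start_seed, map, traffic_density, use_render) and the gymnasium-style reset/step signatures follow recent releases; older versions use different key names (e.g. environment_num) and the legacy gym API.

# Minimal sketch: composing a set of procedurally generated driving scenarios
# in MetaDrive and stepping through them with a random policy.
# Assumptions: the metadrive-simulator pip package with its gymnasium-style API;
# config key names (e.g. num_scenarios vs. environment_num) vary across versions.
from metadrive import MetaDriveEnv

config = dict(
    num_scenarios=100,    # number of distinct procedurally generated scenarios
    start_seed=0,         # scenarios are indexed by seed, so train/test sets are disjoint seed ranges
    map=4,                # compose each map from 4 road blocks drawn from the block library
    traffic_density=0.1,  # density of surrounding traffic vehicles
    use_render=False,     # headless mode for RL training
)

env = MetaDriveEnv(config)
obs, info = env.reset()  # gymnasium API; legacy gym versions return obs only
for _ in range(1000):
    action = env.action_space.sample()  # placeholder for a trained policy's action
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()  # a new scenario is sampled from the configured set
env.close()

Evaluating generalization in the spirit of the paper's experiments would then amount to training on one seed range and constructing a second MetaDriveEnv whose start_seed points to held-out scenarios.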
Related Papers
50 items in total
  • [21] Augmenting Reinforcement Learning With Transformer-Based Scene Representation Learning for Decision-Making of Autonomous Driving
    Liu, Haochen
    Huang, Zhiyu
    Mo, Xiaoyu
    Lv, Chen
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (03) : 4405 - 4421
  • [22] Safe Reinforcement Learning in Autonomous Driving With Epistemic Uncertainty Estimation
    Zhang, Zheng
    Liu, Qi
    Li, Yanjie
    Lin, Ke
    Li, Linyu
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (10) : 13653 - 13666
  • [23] Safety-based Reinforcement Learning Longitudinal Decision for Autonomous Driving in Crosswalk Scenarios
    Xiong, Fangzhou
    Ren, Dongchun
    Fan, Mingyu
    Ding, Shuguang
    Liu, Zhiyong
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [24] Exploiting Multi-Modal Fusion for Urban Autonomous Driving Using Latent Deep Reinforcement Learning
    Khalil, Yasser H.
    Mouftah, Hussein T.
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (03) : 2921 - 2935
  • [25] Task-Driven Autonomous Driving: Balanced Strategies Integrating Curriculum Reinforcement Learning and Residual Policy
    Shi, Jiamin
    Zhang, Tangyike
    Zong, Ziqi
    Chen, Shitao
    Xin, Jingmin
    Zheng, Nanning
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (11) : 9454 - 9461
  • [26] Multi-Input Autonomous Driving Based on Deep Reinforcement Learning With Double Bias Experience Replay
    Cui, Jianping
    Yuan, Liang
    He, Li
    Xiao, Wendong
    Ran, Teng
    Zhang, Jianbo
    IEEE SENSORS JOURNAL, 2023, 23 (11) : 11253 - 11261
  • [27] Robustness and Adaptability of Reinforcement Learning-Based Cooperative Autonomous Driving in Mixed-Autonomy Traffic
    Valiente, Rodolfo
    Toghi, Behrad
    Pedarsani, Ramtin
    Fallah, Yaser P.
    IEEE OPEN JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 3 : 397 - 410
  • [28] UNMAS: Multiagent Reinforcement Learning for Unshaped Cooperative Scenarios
    Chai, Jiajun
    Li, Weifan
    Zhu, Yuanheng
    Zhao, Dongbin
    Ma, Zhe
    Sun, Kewu
    Ding, Jishiyu
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (04) : 2093 - 2104
  • [29] Composing Synergistic Macro Actions for Reinforcement Learning Agents
    Chen, Yu-Ming
    Chang, Kuan-Yu
    Liu, Chien
    Hsiao, Tsu-Ching
    Hong, Zhang-Wei
    Lee, Chun-Yi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (05) : 7251 - 7258
  • [30] Socially Intelligent Reinforcement Learning for Optimal Automated Vehicle Control in Traffic Scenarios
    Taghavifar, Hamid
    Wei, Chongfeng
    Taghavifar, Leyla
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2025, 22 : 129 - 140