Model-Based Imitation Learning for Urban Driving

Cited by: 0
Authors
Hu, Anthony [1 ,2 ]
Corrado, Gianluca [1 ]
Griffiths, Nicolas [1 ]
Murez, Zak [1 ]
Gurau, Corina [1 ]
Yeo, Hudson [1 ]
Kendall, Alex [1 ]
Cipolla, Roberto [2 ]
Shotton, Jamie [1 ]
Affiliations
[1] Wayve, London, England
[2] Univ Cambridge, Cambridge, England
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
An accurate model of the environment and the dynamic agents acting in it offers great potential for improving motion planning. We present MILE: a Model-based Imitation LEarning approach to jointly learn a model of the world and a policy for autonomous driving. Our method leverages 3D geometry as an inductive bias and learns a highly compact latent space directly from high-resolution videos of expert demonstrations. Our model is trained on an offline corpus of urban driving data, without any online interaction with the environment. MILE improves upon the prior state of the art by 31% in driving score on the CARLA simulator when deployed in a completely new town and new weather conditions. Our model can predict diverse and plausible states and actions that can be interpretably decoded to bird's-eye view semantic segmentation. Further, we demonstrate that it can execute complex driving manoeuvres from plans entirely predicted in imagination. Our approach is the first camera-only method that models static scene, dynamic scene, and ego-behaviour in an urban driving environment. The code and model weights are available at https://github.com/wayveai/mile.
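To make the abstract's architecture concrete, the following PyTorch sketch shows the general shape of such a model-based imitation learning setup: an image encoder producing a compact latent, a recurrent transition model, a policy head trained to imitate expert actions, a bird's-eye-view segmentation decoder, and a latent rollout ("imagination") step. This is a minimal illustrative sketch only, not the authors' MILE implementation; all module names, dimensions, and the teacher-forcing scheme are assumptions. The actual code is at https://github.com/wayveai/mile.

# Minimal illustrative sketch (assumed structure, not the official MILE code).
import torch
import torch.nn as nn


class WorldModelPolicy(nn.Module):
    """Jointly learned world model and driving policy (simplified sketch)."""

    def __init__(self, latent_dim=128, action_dim=2, bev_classes=8, bev_size=32):
        super().__init__()
        self.latent_dim = latent_dim
        # Observation encoder: compresses a camera frame into a compact latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Recurrent transition model: advances the latent state given the previous action.
        self.transition = nn.GRUCell(latent_dim + action_dim, latent_dim)
        # Prior over the next observation embedding, used for rollouts in imagination.
        self.prior = nn.Linear(latent_dim, latent_dim)
        # Policy head: predicts the expert action (e.g. steering, acceleration).
        self.policy = nn.Linear(latent_dim, action_dim)
        # Decoder from the latent state to bird's-eye-view semantic segmentation logits.
        self.bev_decoder = nn.Sequential(
            nn.Linear(latent_dim, bev_classes * bev_size * bev_size),
            nn.Unflatten(1, (bev_classes, bev_size, bev_size)),
        )

    def forward(self, frames, actions):
        """frames: (B, T, 3, H, W) camera video; actions: (B, T, action_dim) expert actions."""
        B, T = frames.shape[:2]
        h = frames.new_zeros(B, self.latent_dim)
        prev_action = actions.new_zeros(B, actions.shape[-1])
        pred_actions, pred_bev = [], []
        for t in range(T):
            obs_latent = self.encoder(frames[:, t])
            # Fold the new observation and the previous expert action into the state.
            h = self.transition(torch.cat([obs_latent, prev_action], dim=-1), h)
            pred_actions.append(self.policy(h))    # imitation target: expert action at t
            pred_bev.append(self.bev_decoder(h))   # interpretable BEV decoding of the state
            prev_action = actions[:, t]            # teacher forcing with recorded actions
        return torch.stack(pred_actions, 1), torch.stack(pred_bev, 1)

    def imagine(self, h, steps):
        """Roll out a plan purely in latent space, with no new camera frames."""
        plan = []
        for _ in range(steps):
            action = self.policy(h)
            # Replace the real observation embedding with the learned prior's prediction.
            h = self.transition(torch.cat([self.prior(h), action], dim=-1), h)
            plan.append(action)
        return torch.stack(plan, 1)

In this sketch, training would minimise a behavioural-cloning loss on the recorded actions and a segmentation loss on BEV labels over the offline expert corpus, with no online interaction. The published model differs in important ways (probabilistic latent state, 3D geometry as an inductive bias, high-resolution video inputs), as described in the abstract.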
Pages: 14
Related Papers
50 records in total
  • [21] Model-Based Robot Imitation with Future Image Similarity
    A. Wu
    A. J. Piergiovanni
    M. S. Ryoo
    International Journal of Computer Vision, 2020, 128 : 1360 - 1374
  • [22] Comparison of Control Methods Based on Imitation Learning for Autonomous Driving
    Gao, Yinfeng
    Liu, Yuqi
    Zhang, Qichao
    Wang, Yu
    Zhao, Dongbin
    Ding, Dawei
    Pang, Zhonghua
    Zhang, Yueming
    2019 TENTH INTERNATIONAL CONFERENCE ON INTELLIGENT CONTROL AND INFORMATION PROCESSING (ICICIP), 2019, : 274 - 281
  • [23] Uncertainty-Aware Model-Based Offline Reinforcement Learning for Automated Driving
    Diehl, Christopher
    Sievernich, Timo Sebastian
    Kruger, Martin
    Hoffmann, Frank
    Bertram, Torsten
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (02) : 1167 - 1174
  • [24] Integrating Deep Reinforcement Learning with Model-based Path Planners for Automated Driving
    Yurtsever, Ekim
    Capito, Linda
    Redmill, Keith
    Ozguner, Umit
    2020 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2020, : 1311 - 1316
  • [25] Model-Based Reinforcement Learning for Eco-Driving Control of Electric Vehicles
    Lee, Heeyun
    Kim, Namwook
    Cha, Suk Won
    IEEE ACCESS, 2020, 8 : 202886 - 202896
  • [26] Full Vehicle Trajectory Planning Model for Urban Traffic Control Based on Imitation Learning
    Ying, Jun
    Feng, Yiheng
    TRANSPORTATION RESEARCH RECORD, 2022, 2676 (07) : 186 - 198
  • [27] An end-to-end learning of driving strategies based on DDPG and imitation learning
    Zou, Qijie
    Xiong, Kang
    Hou, Yingli
    PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 3190 - 3195
  • [28] Imitation learning for agile autonomous driving
    Pan, Yunpeng
    Cheng, Ching-An
    Saigol, Kamil
    Lee, Keuntaek
    Yan, Xinyan
    Theodorou, Evangelos A.
    Boots, Byron
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2020, 39 (2-3): : 286 - 302
  • [29] Generative Adversarial Imitation Learning for End-to-End Autonomous Driving on Urban Environments
    Karl Couto, Gustavo Claudio
    Antonelo, Eric Aislan
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [30] Correction to: Model-Based Robot Imitation with Future Image Similarity
    A. Wu
    A. J. Piergiovanni
    M. S. Ryoo
    International Journal of Computer Vision, 2020, 128 : 1375 - 1375