Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning

Cited by: 38
Authors
Cai, Peide [1 ]
Wang, Hengli [1 ]
Huang, Huaiyang [1 ]
Liu, Yuxuan [1 ]
Liu, Ming [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
Keywords
Reinforcement learning; imitation learning; model learning for control; autonomous racing; uncertainty awareness
DOI
10.1109/LRA.2021.3097345
Chinese Library Classification
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
Autonomous car racing is a challenging task in the robotic control area. Traditional modular methods require accurate mapping, localization and planning, which makes them computationally inefficient and sensitive to environmental changes. Recently, deep-learning-based end-to-end systems have shown promising results for autonomous driving/racing. However, they are commonly implemented by supervised imitation learning (IL), which suffers from the distribution mismatch problem, or by reinforcement learning (RL), which requires a huge amount of risky interaction data. In this work, we present a general deep imitative reinforcement learning approach (DIRL), which successfully achieves agile autonomous racing using visual inputs. The driving knowledge is acquired from both IL and model-based RL, where the agent can learn from human teachers as well as perform self-improvement by safely interacting with an offline world model. We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation. The evaluation results demonstrate that our method outperforms previous IL and RL methods in terms of sample efficiency and task performance. Demonstration videos are available at https://caipeide.github.io/autorace-dirl/.
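The abstract describes driving knowledge being acquired both from human demonstrations (imitation learning) and from self-improvement against an offline world model (model-based reinforcement learning). The sketch below only illustrates that general idea in a toy latent-state setup; all names (Policy, WorldModel, dirl_style_update) and the loss weighting are assumptions for illustration, not the authors' DIRL implementation, which the paper and project page describe in full.

# Hypothetical sketch: a policy trained jointly from (a) behavior cloning on human
# demonstrations and (b) imagined rollouts inside a learned world model, so the RL
# part needs no additional risky real-world interaction. Not the authors' code.
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Learned latent dynamics: predicts next latent state and reward from (state, action)."""
    def __init__(self, latent_dim=64, action_dim=2):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.reward = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z, a):
        z_next = self.dynamics(torch.cat([z, a], dim=-1))
        return z_next, self.reward(z_next).squeeze(-1)

class Policy(nn.Module):
    """Maps a latent (visual) state to a bounded steering/throttle command."""
    def __init__(self, latent_dim=64, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, action_dim), nn.Tanh())

    def forward(self, z):
        return self.net(z)

def dirl_style_update(policy, world_model, optimizer, z_demo, a_demo, z_start,
                      horizon=10, imitation_weight=1.0, rl_weight=0.1):
    """One training step mixing imitation and model-based policy improvement.
    The optimizer is assumed to hold only the policy parameters, so the world
    model (assumed pre-trained) is not updated here."""
    # (a) Imitation: match the human expert's actions on demonstration states.
    imitation_loss = ((policy(z_demo) - a_demo) ** 2).mean()

    # (b) Model-based RL: roll the policy forward inside the world model and
    # maximize predicted return, entirely offline.
    z, total_reward = z_start, 0.0
    for _ in range(horizon):
        z, r = world_model(z, policy(z))
        total_reward = total_reward + r.mean()
    rl_loss = -total_reward

    loss = imitation_weight * imitation_loss + rl_weight * rl_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    policy, wm = Policy(), WorldModel()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    z_demo, a_demo = torch.randn(32, 64), torch.rand(32, 2) * 2 - 1  # dummy demonstration batch
    z_start = torch.randn(32, 64)                                    # dummy rollout start states
    print(dirl_style_update(policy, wm, opt, z_demo, a_demo, z_start))

The relative weighting of the two terms reflects the trade-off the abstract points to: the imitation term anchors the policy to safe human behavior, while the model-based term allows self-improvement without extra real-world risk.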
Pages: 7262-7269
Number of pages: 8
Related Papers
50 entries in total
  • [1] Vision-based control in the open racing car simulator with deep and reinforcement learning
    Zhu Y.
    Zhao D.
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 (12) : 15673 - 15685
  • [2] Autonomous Car Racing in Simulation Environment Using Deep Reinforcement Learning
    Guckiran, Kivanc
    Bolat, Bulent
    2019 INNOVATIONS IN INTELLIGENT SYSTEMS AND APPLICATIONS CONFERENCE (ASYU), 2019, : 329 - 334
  • [3] Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing
    Fu, Jiawei
    Song, Yunlong
    Wu, Yan
    Yu, Fisher
    Scaramuzza, Davide
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 5243 - 5250
  • [4] Autonomous Landing on a Moving Platform Using Vision-Based Deep Reinforcement Learning
    Ladosz, Pawel
    Mammadov, Meraj
    Shin, Heejung
    Shin, Woojae
    Oh, Hyondong
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (05) : 4575 - 4582
  • [5] Vision-Based Autonomous Navigation Approach for a Tracked Robot Using Deep Reinforcement Learning
    Ejaz, Muhammad Mudassir
    Tang, Tong Boon
    Lu, Cheng-Kai
    IEEE SENSORS JOURNAL, 2021, 21 (02) : 2230 - 2240
  • [6] Vision-based Navigation Using Deep Reinforcement Learning
    Kulhanek, Jonas
    Derner, Erik
    de Bruin, Tim
    Babuska, Robert
    2019 EUROPEAN CONFERENCE ON MOBILE ROBOTS (ECMR), 2019,
  • [7] Towards monocular vision-based autonomous flight through deep reinforcement learning
    Kim, Minwoo
    Kim, Jongyun
    Jung, Minjae
    Oh, Hyondong
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 198
  • [8] Vision-Based Robotic Arm Control Algorithm Using Deep Reinforcement Learning for Autonomous Objects Grasping
    Sekkat, Hiba
    Tigani, Smail
    Saadane, Rachid
    Chehri, Abdellah
    APPLIED SCIENCES-BASEL, 2021, 11 (17):
  • [9] CIRL: Controllable Imitative Reinforcement Learning for Vision-Based Self-driving
    Liang, Xiaodan
    Wang, Tairui
    Yang, Luona
    Xing, Eric
    COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 : 604 - 620
  • [10] A Deep Reinforcement Learning Technique for Vision-Based Autonomous Multirotor Landing on a Moving Platform
    Rodriguez-Ramos, Alejandro
    Sampedro, Carlos
    Bavle, Hriday
    Gil Moreno, Ignacio
    Campoy, Pascual
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 1010 - 1017