HeuRL: A Heuristically Initialized Reinforcement Learning Method for Autonomous Driving Control Task

Cited by: 0
Authors
Xu, Jiaxuan [1 ]
Yuan, Jian [2 ]
Institutions
[1] Tsinghua Univ, Global Innovat Exchange Inst, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
Source
2018 INTERNATIONAL CONFERENCE ON CONTROL AND ROBOTS (ICCR) | 2018
Keywords
robotic control; autonomous driving; artificial intelligence; reinforcement learning; simulator;
DOI
Not available
CLC Classification
TP [Automation and Computer Technology];
Discipline Code
0812 ;
Abstract
Although reinforcement learning (RL) shows great intelligence in many simulation tasks, it has not been widely applied to real-world vehicle control because of simulation-to-real-world (Sim2Real) transfer difficulties. Vehicle models and road conditions in the real world can differ greatly from those in simulators. As a result, RL models trained in simulators usually fail and must be retrained in the new environment, which is dangerous and time-consuming. Some hand-crafted heuristic methods, by contrast, are independent of environmental characteristics and perform more reliably in unfamiliar situations. In this paper, we introduce a heuristically initialized RL model (HeuRL), which sped up learning convergence by 4 times and decreased collisions by 90% during training in a new environment. The experiments were conducted in The Open Racing Car Simulator (TORCS), an open-source platform for real-time car racing simulation.
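The core idea described above, initializing an RL policy from a hand-crafted heuristic rather than from random weights, can be sketched with a minimal behavior-cloning step. The heuristic controller, its gains (`k_angle`, `k_pos`), and the two-dimensional state `[angle, track_pos]` below are illustrative assumptions, not the paper's actual controller or state space; in HeuRL the pretrained policy would subsequently be fine-tuned with RL.

```python
import numpy as np

# Assumed hand-crafted heuristic (for illustration only): steer
# proportionally to heading error and lateral track offset, the
# kind of rule-based driver a heuristic initialization starts from.
def heuristic_steer(angle, track_pos, k_angle=2.0, k_pos=0.5):
    return np.clip(k_angle * angle - k_pos * track_pos, -1.0, 1.0)

# Collect (state, action) pairs by querying the heuristic on
# randomly sampled states: state = [heading angle, track position].
rng = np.random.default_rng(0)
states = rng.uniform(-0.3, 0.3, size=(1000, 2))
actions = heuristic_steer(states[:, 0], states[:, 1])

# Pretrain a linear policy pi(s) = s @ w (plus bias) by least
# squares, i.e. behavior cloning of the heuristic. An RL algorithm
# would then fine-tune this policy instead of learning from scratch.
X = np.hstack([states, np.ones((len(states), 1))])  # append bias column
w, *_ = np.linalg.lstsq(X, actions, rcond=None)

def pretrained_policy(angle, track_pos):
    return float(np.clip(np.array([angle, track_pos, 1.0]) @ w, -1.0, 1.0))
```

Because the heuristic is linear and unsaturated on the sampled states, the cloned policy reproduces it almost exactly, so training begins from a driver that already avoids most collisions rather than from random behavior.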
Pages: 57-62
Number of pages: 6
Related Papers
50 items in total
  • [21] Unity-Based Autonomous Driving Environment: A Platform for Validating Reinforcement Learning Agents
    Gonzalez-Santocildes, Asier
    Vazquez, Juan-Ignacio
    HYBRID ARTIFICIAL INTELLIGENT SYSTEMS, PT II, HAIS 2024, 2025, 14858 : 280 - 291
  • [22] A driving profile recommender system for autonomous driving using sensor data and reinforcement learning
    Chronis, Christos
    Sardianos, Christos
    Varlamis, Iraklis
    Michail, Dimitrios
    Tserpes, Konstantinos
    25TH PAN-HELLENIC CONFERENCE ON INFORMATICS WITH INTERNATIONAL PARTICIPATION (PCI2021), 2021, : 33 - 38
  • [23] Market-Based Dynamic Task Allocation Using Heuristically Accelerated Reinforcement Learning
    Gurzoni, Jose Angelo, Jr.
    Tonidandel, Flavio
    Bianchi, Reinaldo A. C.
    PROGRESS IN ARTIFICIAL INTELLIGENCE-BOOK, 2011, 7026 : 365 - 376
  • [24] Reinforcement Learning-Based Path following Control with Dynamics Randomization for Parametric Uncertainties in Autonomous Driving
    Ahmic, Kenan
    Ultsch, Johannes
    Brembeck, Jonathan
    Winter, Christoph
    APPLIED SCIENCES-BASEL, 2023, 13 (06):
  • [25] Learning autonomous race driving with action mapping reinforcement learning
    Wang, Yuanda
    Yuan, Xin
    Sun, Changyin
    ISA TRANSACTIONS, 2024, 150 : 1 - 14
  • [26] Reinforcement Learning Based Speed Control with Creep Rate Constraints for Autonomous Driving of Mining Electric Locomotives
    Li, Ying
    Zhu, Zhencai
    Li, Xiaoqiang
    APPLIED SCIENCES-BASEL, 2024, 14 (11):
  • [27] RLAD: Reinforcement Learning From Pixels for Autonomous Driving in Urban Environments
    Coelho, Daniel
    Oliveira, Miguel
    Santos, Vitor
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, 21 (04) : 7427 - 7435
  • [28] Fear-Neuro-Inspired Reinforcement Learning for Safe Autonomous Driving
    He, Xiangkun
    Wu, Jingda
    Huang, Zhiyu
    Hu, Zhongxu
    Wang, Jun
    Sangiovanni-Vincentelli, Alberto
    Lv, Chen
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (01) : 267 - 279
  • [29] Deep Reinforcement Learning for Autonomous Driving in Amazon Web Services DeepRacer
    Petryshyn, Bohdan
    Postupaiev, Serhii
    Ben Bari, Soufiane
    Ostreika, Armantas
    INFORMATION, 2024, 15 (02)
  • [30] A Comprehensive Survey on the Application of Deep and Reinforcement Learning Approaches in Autonomous Driving
    Ben Elallid, Badr
    Benamar, Nabil
    Hafid, Abdelhakim Senhaji
    Rachidi, Tajjeeddine
    Mrani, Nabil
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2022, 34 (09) : 7366 - 7390