Path planning via reinforcement learning with closed-loop motion control and field tests

Cited by: 0
Authors
Feher, Arpad [1 ]
Domina, Adam [2 ]
Bardos, Adam [2 ]
Aradi, Szilard [1 ]
Becsi, Tamas [1 ]
Affiliations
[1] Budapest Univ Technol & Econ, Fac Transportat Engn & Vehicle Engn, Dept Control Transportat & Vehicle Syst, Muegyetem Rkp 3, H-1111 Budapest, Hungary
[2] Budapest Univ Technol & Econ, Dept Automot Technol, Fac Transportat Engn & Vehicle Engn, Muegyetem Rkp 3, H-1111 Budapest, Hungary
Keywords
Vehicle dynamics; Advanced driver assistance systems; Machine learning; Reinforcement learning; Model predictive control; Active steering control; Model; Simulation; Vehicles
DOI
10.1016/j.engappai.2024.109870
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Performing evasive maneuvers with highly automated vehicles is a challenging task. The algorithm must satisfy safety constraints and complete the maneuver while keeping the car in a controllable state. Furthermore, when all aspects of vehicle dynamics are considered, the path generation problem is numerically complex, so its classical solutions can hardly meet real-time requirements. On the other hand, purely reinforcement learning based approaches could only handle this problem as a simple driving task and would not provide feasibility information over the whole task horizon. Therefore, this paper presents a hierarchical method for obstacle avoidance of an automated vehicle to overcome this issue, where the geometric path generation is provided by a single-step continuous reinforcement learning agent, while a model predictive controller performs the lateral control to execute a double lane change maneuver. As the agent plays the optimization role in this architecture, it is trained in various scenarios to provide the necessary parameters for a geometric path generator in a one-step neural network output. During training, the controller that follows the track evaluates the feasibility of the generated path, and its performance metrics provide feedback to the agent so it can further improve its policy. The framework can train an agent for a given problem with various parameters. As a use case, a static obstacle avoidance maneuver is presented. The proposed framework was tested on an automotive proving ground within the geometric constraints of the ISO 3888-2 test. The results demonstrated its real-time capability and its performance compared with human drivers' abilities.
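As a rough illustration of the training scheme described in the abstract, the following Python sketch walks through one-step episodes in which an agent maps a sampled scenario to continuous path-generator parameters, a geometric path is built from them, a path-following simulation evaluates feasibility and tracking quality, and those metrics form the reward. Everything in it (the sigmoid path shape, the simulate_tracking proxy, the linear-Gaussian policy with a REINFORCE-style update, and all thresholds) is a hypothetical stand-in under stated assumptions, not the authors' neural-network agent or model predictive controller.

```python
# Minimal sketch of the single-step training loop described in the abstract.
# All names and models below (generate_path, simulate_tracking, the sigmoid
# geometry, the linear-Gaussian policy, the reward weights) are illustrative
# assumptions, not the authors' path generator, MPC, or network architecture.
import numpy as np

rng = np.random.default_rng(0)

def generate_path(params, scenario):
    """Hypothetical geometric path generator: maps the agent's one-step output
    (a transition length and a shaping factor) to a lateral-offset profile."""
    length, sharpness = params
    x = np.linspace(0.0, scenario["maneuver_length"], 200)
    y = scenario["lateral_offset"] / (1.0 + np.exp(-sharpness * (x - length / 2.0)))
    return np.stack([x, y], axis=1)

def simulate_tracking(path, scenario):
    """Placeholder for the closed-loop path-following simulation (an MPC
    lateral controller in the paper); returns a tracking metric and a
    feasibility flag that feed back to the agent."""
    d2y = np.gradient(np.gradient(path[:, 1], path[:, 0]), path[:, 0])
    tracking_error = float(np.max(np.abs(d2y))) * scenario["speed"]  # crude proxy
    return tracking_error, tracking_error < 1.0

# One-step episodes: the policy maps scenario features directly to continuous
# path-generator parameters; the controller feedback forms the reward.
theta = np.zeros((2, 2))        # toy linear policy (mean of a Gaussian action)
sigma, lr = 0.1, 1e-3
for episode in range(5000):
    scenario = {"maneuver_length": rng.uniform(50.0, 80.0),
                "lateral_offset": rng.uniform(3.0, 4.0),
                "speed": rng.uniform(15.0, 25.0)}
    obs = np.array([scenario["maneuver_length"] / 100.0, scenario["speed"] / 30.0])
    mean = theta @ obs
    action = mean + sigma * rng.normal(size=2)     # exploration noise
    params = np.array([40.0, 0.2]) + action        # nominal length, sharpness
    path = generate_path(params, scenario)
    tracking_error, feasible = simulate_tracking(path, scenario)
    reward = (1.0 if feasible else -1.0) - tracking_error
    # REINFORCE-style update for the Gaussian policy (no baseline).
    theta += lr * reward * np.outer((action - mean) / sigma**2, obs)
```

The point of the sketch is the data flow rather than the specific models: the path-following controller's feedback, not a hand-crafted geometric cost, is what shapes the agent's single-step output.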
Pages: 13