Enhancing Car-Following Performance in Traffic Oscillations Using Expert Demonstration Reinforcement Learning

Cited by: 3
Authors
Li, Meng [1 ,2 ]
Li, Zhibin [1 ]
Cao, Zehong [3 ]
Affiliations
[1] Southeast Univ, Sch Transportat, Nanjing 210096, Peoples R China
[2] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore 639798, Singapore
[3] Univ South Australia, STEM, Adelaide, SA 5095, Australia
Funding
National Natural Science Foundation of China;
Keywords
Training; Oscillators; Trajectory; Task analysis; Cloning; Safety; Databases; Expert demonstration; reinforcement learning; car-following control; traffic oscillation; ADAPTIVE CRUISE CONTROL; AUTOMATED VEHICLES; CONTROL STRATEGY; MODEL; IMPACT;
DOI
10.1109/TITS.2024.3368474
Chinese Library Classification (CLC)
TU [Building Science];
Discipline Classification Code
0813;
Abstract
Deep reinforcement learning (DRL) algorithms often face challenges in achieving stability and efficiency due to significant policy gradient variance and inaccurate reward function estimation in complex scenarios. This study addresses these issues in the context of multi-objective car-following control tasks with time lag in traffic oscillations. We propose an expert demonstration reinforcement learning (EDRL) approach that aims to stabilize training, accelerate learning, and enhance car-following performance. The key idea is to leverage expert demonstrations, which represent superior car-following control experiences, to improve the DRL policy. Our method involves two sequential steps. In the first step, expert demonstrations are obtained during offline pretraining by utilizing prior traffic knowledge, including car-following trajectories from an empirical database and classic car-following models. In the second step, expert demonstrations are obtained during online training, where the agent interacts with the car-following environment. The EDRL agents are trained through supervised regression on the expert demonstrations using the behavioral cloning technique. Experiments conducted in various traffic oscillation scenarios demonstrate that our proposed method significantly enhances training stability, learning speed, and rewards compared to baseline algorithms.
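For illustration only (not the authors' code): a minimal Python sketch of the first step described in the abstract, assuming the Intelligent Driver Model (IDM) as the classic car-following model that supplies expert acceleration demonstrations, and an illustrative feed-forward policy network pretrained by behavioral cloning (supervised regression on expert actions). All parameter values, network sizes, and variable names below are assumptions; the paper's actual models, empirical trajectory data, and hyperparameters may differ.

    # Sketch: generate expert car-following demonstrations with IDM and
    # pretrain a policy by behavioral cloning (step one of the EDRL idea).
    import numpy as np
    import torch
    import torch.nn as nn

    def idm_acceleration(gap, v, dv, v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0):
        """Classic IDM acceleration; dv = follower speed minus leader speed."""
        s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a_max * b)))
        return a_max * (1.0 - (v / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

    # Build a small demonstration set: state (gap, speed, relative speed) -> expert action.
    rng = np.random.default_rng(0)
    states = np.column_stack([rng.uniform(5, 60, 5000),    # gap [m]
                              rng.uniform(0, 30, 5000),    # follower speed [m/s]
                              rng.uniform(-5, 5, 5000)])   # relative speed [m/s]
    actions = np.array([[idm_acceleration(*s)] for s in states], dtype=np.float32)

    # Illustrative policy network mapping the 3-D state to an acceleration command.
    policy = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                           nn.Linear(64, 64), nn.ReLU(),
                           nn.Linear(64, 1))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    X = torch.tensor(states, dtype=torch.float32)
    Y = torch.tensor(actions)

    # Behavioral cloning: supervised regression of the policy onto expert actions.
    for epoch in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(policy(X), Y)
        loss.backward()
        opt.step()

In the second step described in the abstract, a policy pretrained in this way would be fine-tuned online by a DRL algorithm while continuing to regress toward expert demonstrations collected during interaction with the car-following environment; that online stage is not shown here.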
Pages: 7751-7766
Page count: 16
Related papers (50 in total)
  • [41] Car-Following Model of ULVs using a Deep Learning Model
    Inokuchi, Hiroaki
    Akiyama, Takamasa
    2022 JOINT 12TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS AND 23RD INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (SCIS&ISIS), 2022,
  • [42] Study on car-following behaviors of combined traffic flow
    He, Min
    Rong, Jian
    Liu, Xiao-Ming
Gonglu Jiaotong Keji/Journal of Highway and Transportation Research and Development, 2002, 19(03)
  • [43] An Intelligent Car-following Model Based on Multi-step Deep Reinforcement Learning
    Xing, Tongdi
    Zhang, Jiangyan
    Zhang, Tao
    2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024, : 256 - 261
  • [44] A Bounded Rationality-Aware Car-Following Strategy for Alleviating Cut-In Events and Traffic Disturbances in Traffic Oscillations
    Li, Meng
    Li, Zhibin
    Wang, Bingtong
    Wang, Shunchao
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (11) : 17902 - 17916
  • [45] A simple stochastic car-following model for traffic flow
    Meng, Jian-ping
    Dong, Li-yun
    PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON NONLINEAR MECHANICS, 2007, : 1067 - 1070
  • [47] A Combined Reinforcement Learning and Model Predictive Control for Car-Following Maneuver of Autonomous Vehicles
    Wang, Liwen
    Yang, Shuo
    Yuan, Kang
    Huang, Yanjun
    Chen, Hong
    CHINESE JOURNAL OF MECHANICAL ENGINEERING, 2023, 36 (01)
  • [48] Experimental analysis of car-following dynamics and traffic stability
    Ranjitkar, P
    Nakatsuji, T
    Kawamura, A
TRAFFIC FLOW THEORY 2005, 2005, (1934): 22 - 32
  • [49] Bilateral Deep Reinforcement Learning Approach for Better-than-human Car-following
    Shi, Tianyu
    Ai, Yifei
    ElSamadisy, Omar
    Abdulhai, Baher
    2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, : 3986 - 3992
  • [50] Car-following model of multispecies systems of road traffic
    Mason, AD
    Woods, AW
PHYSICAL REVIEW E, 1997, 55 (03): 2203 - 2214