Modeling the Effects of Autonomous Vehicles on Human Driver Car-Following Behaviors Using Inverse Reinforcement Learning

Cited by: 22
Authors
Wen, Xiao [1 ]
Jian, Sisi [2 ]
He, Dengbo [2 ,3 ]
Institutions
[1] Hong Kong Univ Sci & Technol HKUST, Interdisciplinary Programs Office (IPO), Division of Emerging Interdisciplinary Areas (EMIA), Intelligent Transportation, Kowloon, Hong Kong, Peoples R China
[2] Hong Kong Univ Sci & Technol HKUST, Dept Civil & Environm Engn, Kowloon, Hong Kong, Peoples R China
[3] HKUST Guangzhou, Intelligent Transportation Thrust & Robotics & Autonomous Systems Hub, Guangzhou 511400, Guangdong, Peoples R China
Keywords
Autonomous vehicles; car-following; vehicle trajectory; driver behavior; inverse reinforcement learning; deep reinforcement learning; VALIDATION; CALIBRATION;
DOI
10.1109/TITS.2023.3298150
Chinese Library Classification
TU [Building Science];
Subject Classification Code
0813;
Abstract
The development of autonomous driving technology will lead to a transition period during which human-driven vehicles (HVs) will share the road with autonomous vehicles (AVs). Understanding the interactions between AVs and HVs is critical for traffic safety and efficiency. Previous studies have used traffic/numerical simulations and field experiments to investigate HVs' behavioral changes when following AVs. However, such approaches simplify the actual scenarios and may yield biased results. Therefore, the objective of this study is to realistically model HV-following-AV dynamics and their microscopic interactions, which are important for intelligent transportation applications. HV-following-AV and HV-following-HV events are extracted from the high-resolution (10 Hz) Waymo Open Dataset. Statistical test results reveal significant differences in calibrated intelligent driver model (IDM) parameters between HV-following-AV and HV-following-HV events. An inverse reinforcement learning model (Inverse soft-Q Learning) is proposed to retrieve HVs' reward functions in HV-following-AV events. A deep reinforcement learning (DRL) approach, soft actor-critic (SAC), is adopted to estimate the optimal policy for HVs when following AVs. The results show that, compared with other conventional and data-driven car-following models, the proposed model leads to significantly more accurate trajectory predictions. In addition, the recovered reward functions indicate that drivers' preferences when following AVs differ from those when following HVs.
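The abstract's parametric baseline, the intelligent driver model (IDM), maps the follower's speed, its approach rate toward the leader, and the gap to an acceleration. A minimal sketch of the standard IDM formulation follows; the parameter values (desired speed, time headway, acceleration bounds) are illustrative defaults, not the values calibrated from the Waymo data in this paper:

```python
import math

def idm_acceleration(v, dv, s,
                     v0=30.0,    # desired speed (m/s) -- illustrative
                     T=1.5,      # desired time headway (s)
                     a_max=1.0,  # maximum acceleration (m/s^2)
                     b=2.0,      # comfortable deceleration (m/s^2)
                     s0=2.0,     # minimum standstill gap (m)
                     delta=4.0): # acceleration exponent
    """Intelligent Driver Model acceleration of the following vehicle.

    v  -- follower speed (m/s)
    dv -- approach rate, follower speed minus leader speed (m/s)
    s  -- bumper-to-bumper gap to the leader (m)
    """
    # Desired dynamic gap: standstill gap + headway term + braking term.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    # Free-flow term minus interaction term.
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)
```

Calibrating the HV-following-AV versus HV-following-HV events amounts to fitting parameters such as `T`, `a_max`, and `s0` to observed trajectories; the paper's statistical tests compare these fitted values across the two event types.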
Pages: 13903-13915
Page count: 13
Related Papers
50 items in total
  • [41] Improving Car-Following Control in Mixed Traffic: A Deep Reinforcement Learning Framework with Aggregated Human-Driven Vehicles
    Chen, Xianda
    Tiu, PakHin
    Zhang, Yihuai
    Zhu, Meixin
    Zheng, Xinhu
    Wang, Yinhai
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 627 - 632
  • [42] A LiDAR-assisted Smart Car-following Framework for Autonomous Vehicles
    Yi, Xianyong
    Ghazzai, Hakim
    Massoud, Yehia
    2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023,
  • [43] Car-following Model of Connected and Autonomous Vehicles Considering Multiple Feedbacks
    Qin Y.-Y.
    Wang H.
    Ran B.
    Wang, Hao (haowang@seu.edu.cn), 2018, Science Press (18): 48-54
  • [44] Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning
    Lee, Heeyun
    Kim, Kyunghyun
    Kim, Namwook
    Cha, Suk Won
    APPLIED ENERGY, 2022, 313
  • [45] Modeling Car-Following Behavior for Adaptive Cruise Control Vehicles
    Qin, Yanyan
    Wang, Hao
    CICTP 2019: TRANSPORTATION IN CHINA-CONNECTING THE WORLD, 2019, : 5613 - 5622
  • [46] Learning Car-Following Behaviors Using Bayesian Matrix Normal Mixture Regression
    Zhang, Chengyuan
    Chen, Kehua
    Zhu, Meixin
    Yang, Hai
    Sun, Lijun
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 608 - 613
  • [47] Driver Characteristics Oriented Autonomous Longitudinal Driving System in Car-Following Situation
    Kim, Haksu
    Min, Kyunghan
    Sunwoo, Myoungho
    SENSORS, 2020, 20 (21) : 1 - 17
  • [48] Trajectory Optimization of CAVs in Freeway Work Zone considering Car-Following Behaviors Using Online Multiagent Reinforcement Learning
    Zhu, Tong
    Li, Xiaohu
    Fan, Wei
    Wang, Changshuai
    Liu, Haoxue
    Zhao, Runqing
    JOURNAL OF ADVANCED TRANSPORTATION, 2021, 2021
  • [49] A Car-following Control Algorithm Based on Deep Reinforcement Learning
    Zhu B.
    Jiang Y.-D.
    Zhao J.
    Chen H.
    Deng W.-W.
    Zhongguo Gonglu Xuebao/China Journal of Highway and Transport, 2019, 32 (06): : 53 - 60
  • [50] Towards robust car-following based on deep reinforcement learning
    Hart, Fabian
    Okhrin, Ostap
    Treiber, Martin
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2024, 159