Modeling the Effects of Autonomous Vehicles on Human Driver Car-Following Behaviors Using Inverse Reinforcement Learning

Cited by: 22
Authors
Wen, Xiao [1 ]
Jian, Sisi [2 ]
He, Dengbo [2 ,3 ]
Affiliations
[1] Hong Kong Univ Sci & Technol HKUST, Interdisciplinary Programs Off IPO, Div Emerging Interdisciplinary Areas EMIA, Intelligent Transportat, Kowloon, Hong Kong, Peoples R China
[2] Hong Kong Univ Sci & Technol HKUST, Dept Civil & Environm Engn, Kowloon, Hong Kong, Peoples R China
[3] HKUST Guangzhou, Intelligent Transportat Thrust & Robot & Autonomous Syst Thrust, Systems Hub, Guangzhou 511400, Guangdong, Peoples R China
Keywords
Autonomous vehicles; car-following; vehicle trajectory; driver behavior; inverse reinforcement learning; deep reinforcement learning; VALIDATION; CALIBRATION;
DOI
10.1109/TITS.2023.3298150
CLC Number
TU [Building Science];
Discipline Code
0813;
Abstract
The development of autonomous driving technology will lead to a transition period during which human-driven vehicles (HVs) share the road with autonomous vehicles (AVs). Understanding the interactions between AVs and HVs is critical for traffic safety and efficiency. Previous studies have used traffic/numerical simulations and field experiments to investigate HVs' behavioral changes when following AVs. However, such approaches simplify the actual scenarios and may produce biased results. The objective of this study is therefore to realistically model HV-following-AV dynamics and their microscopic interactions, which are important for intelligent transportation applications. HV-following-AV and HV-following-HV events are extracted from the high-resolution (10 Hz) Waymo Open Dataset. Statistical tests reveal significant differences in calibrated intelligent driver model (IDM) parameters between HV-following-AV and HV-following-HV events. An inverse reinforcement learning model (Inverse soft-Q Learning) is proposed to retrieve HVs' reward functions in HV-following-AV events. A deep reinforcement learning (DRL) approach, soft actor-critic (SAC), is adopted to estimate the optimal policy for HVs when following AVs. The results show that, compared with other conventional and data-driven car-following models, the proposed model yields significantly more accurate trajectory predictions. In addition, the recovered reward functions indicate that drivers' preferences when following AVs differ from those when following HVs.
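For reference, the intelligent driver model (IDM) named in the abstract is the standard car-following formulation; its calibrated parameters are the ones compared between HV-following-AV and HV-following-HV events:

```latex
\dot{v} \;=\; a_{\max}\!\left[\,1-\left(\frac{v}{v_{0}}\right)^{\delta}
      -\left(\frac{s^{*}(v,\Delta v)}{s}\right)^{2}\right],
\qquad
s^{*}(v,\Delta v) \;=\; s_{0}+vT+\frac{v\,\Delta v}{2\sqrt{a_{\max}\,b}}
```

Here s is the bumper-to-bumper gap, v the follower's speed, and Δv = v − v_lead the approach rate; the calibrated parameters are the desired speed v0, desired time headway T, maximum acceleration a_max, comfortable deceleration b, minimum gap s0, and acceleration exponent δ (commonly fixed at 4).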
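As an illustration of the calibration step, the sketch below fits IDM parameters to one following event by minimizing the acceleration error. This is a minimal sketch, not the paper's code: the synthetic arrays stand in for 10 Hz Waymo trajectories, and all names (idm_accel, loss, the parameter bounds) are hypothetical choices.

```python
import numpy as np
from scipy.optimize import minimize

def idm_accel(v, dv, s, v0, T, a_max, b, s0, delta=4.0):
    """IDM acceleration. v: follower speed, dv = v - v_lead, s: gap."""
    s_star = s0 + np.maximum(0.0, v * T + v * dv / (2.0 * np.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

def loss(params, v, dv, s, a_obs):
    """Mean squared error between IDM-predicted and observed accelerations."""
    return np.mean((idm_accel(v, dv, s, *params) - a_obs) ** 2)

# Synthetic stand-in for one 10 Hz car-following event (hypothetical values).
rng = np.random.default_rng(0)
n = 300
v = rng.uniform(8.0, 15.0, n)      # follower speed [m/s]
dv = rng.normal(0.0, 1.0, n)       # approach rate [m/s]
s = rng.uniform(10.0, 40.0, n)     # gap [m]
true = (16.0, 1.2, 1.5, 2.0, 2.5)  # v0, T, a_max, b, s0
a_obs = idm_accel(v, dv, s, *true) + rng.normal(0.0, 0.05, n)

# Bounded least-squares calibration of the five IDM parameters.
res = minimize(loss, x0=[15.0, 1.5, 1.0, 1.5, 2.0],
               args=(v, dv, s, a_obs), method="L-BFGS-B",
               bounds=[(1, 40), (0.1, 5), (0.1, 5), (0.1, 5), (0.1, 10)])
print(dict(zip(["v0", "T", "a_max", "b", "s0"], res.x)))
```

In the paper, parameters calibrated in this fashion are compared statistically between HV-following-AV and HV-following-HV events; the sketch only shows the mechanics of fitting a single event.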
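On the reward-recovery side, inverse soft-Q learning learns a soft Q-function directly from demonstrations and reads the implied reward off the inverse soft Bellman operator. In the general maximum-entropy formulation (stated here without claiming the paper's exact state and action definitions):

```latex
r(s,a) \;=\; Q(s,a)-\gamma\,\mathbb{E}_{s'\sim P(\cdot\mid s,a)}\!\left[V(s')\right],
\qquad
V(s) \;=\; \mathbb{E}_{a\sim\pi(\cdot\mid s)}\!\left[Q(s,a)-\alpha\log\pi(a\mid s)\right]
```

Once Q has been fitted to HV-following-AV demonstrations, this recovered r(s,a) is the quantity whose structure the abstract compares across following conditions, and a soft actor-critic agent trained against it gives the predicted following policy.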
Pages: 13903 - 13915
Page count: 13
Related Papers (50 records)
  • [1] Modeling Human Driver Behaviors When Following Autonomous Vehicles: An Inverse Reinforcement Learning Approach
    Wen, Xiao
    Jian, Sisi
    He, Dengbo
    2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, : 1375 - 1380
  • [2] Velocity control in car-following behavior with autonomous vehicles using reinforcement learning
    Wang, Zhe
    Huang, Helai
    Tang, Jinjun
    Meng, Xianwei
    Hu, Lipeng
    ACCIDENT ANALYSIS AND PREVENTION, 2022, 174
  • [3] Reinforcement Learning-based Car-Following Control for Autonomous Vehicles with OTFS
    Liu, Yulin
    Shi, Yuye
    Zhang, Xiaoqi
    Wu, Jun
    Yang, Songyuan
2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2024
  • [4] Modeling of Car-Following Behaviors Considering Driver's Fuzzy Perception Using Deep Learning
    Li, Linbo
    Li, Ruijie
    Zou, Yajie
Tongji Daxue Xuebao/Journal of Tongji University, 2021, 49 (03): 360 - 369
  • [5] A Combined Reinforcement Learning and Model Predictive Control for Car-Following Maneuver of Autonomous Vehicles
    Wang, Liwen
    Yang, Shuo
    Yuan, Kang
    Huang, Yanjun
    Chen, Hong
    CHINESE JOURNAL OF MECHANICAL ENGINEERING, 2023, 36 (01)
  • [6] Car-Following Behavior Modeling With Maximum Entropy Deep Inverse Reinforcement Learning
    Nan, Jiangfeng
    Deng, Weiwen
    Zhang, Ruzheng
    Zhao, Rui
    Wang, Ying
    Ding, Juan
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (02): 3998 - 4010
  • [7] Driver Car-Following Model Based on Deep Reinforcement Learning
    Guo J.
    Li W.
    Luo Y.
    Chen T.
    Li K.
    Automotive Engineering, SAE-China, 43: 571 - 579
  • [8] Human-like autonomous car-following model with deep reinforcement learning
    Zhu, Meixin
    Wang, Xuesong
    Wang, Yinhai
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2018, 97 : 348 - 368