Simulated CAVs Driving and Characteristics of the Mixed Traffic Using Reinforcement Learning Method

Cited: 4
Authors
Guo, Jingqiu [1 ]
Liu, Yangzexi [1 ]
Fang, Shouen [1 ]
Affiliation
[1] Tongji Univ, Minist Educ, Key Lab Rd & Traff Engn, Shanghai 200092, Peoples R China
Source
SMART TRANSPORTATION SYSTEMS 2019 | 2019 / Vol. 149
Keywords
MODEL;
DOI
10.1007/978-981-13-8683-1_20
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research]
Discipline Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
Cooperative gaming methods are a promising approach to mimicking various driving tasks in the field of automated driving. This paper presents a Deep Reinforcement Learning approach for modelling Connected and Automated Vehicles (CAVs) in heterogeneous traffic. First, the Gipps model was integrated into the regular-vehicle agents. Second, an enhanced Q-learning algorithm was employed as the modelling platform for CAVs, strengthening the simulation system's ability to realistically reproduce CAV lane-changing and car-following behaviour. Third, extensive simulation studies on a two-lane highway stretch show that the inclusion of CAVs considerably improves traffic flow, mean speed, and traffic capacity. We also simulated managed-lane policies to determine how CAVs should be distributed across lanes under various conditions. Such understanding is essential for CAV research, as well as for the implications of CAVs for future traffic management.
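The two behavioural models named in the abstract can be illustrated with a minimal sketch: a Gipps speed update for the regular (human-driven) vehicles and a one-step tabular Q-learning backup for the CAV agent. All parameter values, the action set, and the state encoding below are placeholder assumptions for illustration, not the paper's calibration or its enhanced Q-learning variant.

```python
import math
import random
from collections import defaultdict

def gipps_speed(v, v_lead, gap, V=33.3, a=1.7, b=3.4, b_hat=3.0, tau=0.7, s0=6.5):
    """One Gipps-model speed update for a regular (human-driven) vehicle.

    v, v_lead: current speeds of follower and leader (m/s);
    gap: front-to-front spacing to the leader (m);
    s0: effective leader length plus standstill margin (m).
    Parameter values here are illustrative, not the paper's calibration.
    """
    # Free-acceleration branch: speed reachable without a leader constraint
    v_acc = v + 2.5 * a * tau * (1 - v / V) * math.sqrt(0.025 + v / V)
    # Safe-braking branch: highest speed that still allows stopping behind the leader
    disc = b * b * tau * tau + b * (2 * (gap - s0) - v * tau + v_lead * v_lead / b_hat)
    v_safe = -b * tau + math.sqrt(max(disc, 0.0))
    return max(0.0, min(v_acc, v_safe))

# Tabular Q-learning for the CAV agent (a plain variant, for illustration only)
ACTIONS = ("keep_lane", "change_left", "change_right")
Q = defaultdict(float)                   # Q[(state, action)] -> estimated value
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

def choose_action(state):
    """Epsilon-greedy action selection over a discretised traffic state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda act: Q[(state, act)])

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning backup: Q += alpha * (TD error)."""
    best_next = max(Q[(next_state, act)] for act in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In such a setup the regular vehicles advance each time step at the Gipps speed, while each CAV picks a lane action via `choose_action`, receives a reward (e.g. realised speed minus a lane-change penalty), and calls `q_update`.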
Pages: 193-204
Page count: 12
References
15 in total
  • [1] Anargiros I.D., 2018, MACROSCOPIC MULTILAN
  • [2] Towards vehicle automation: Roadway capacity formulation for traffic mixed with regular and automated vehicles
    Chen, Danjue
    Ahn, Soyoung
    Chitturi, Madhav
    Noyce, David A.
    [J]. TRANSPORTATION RESEARCH PART B-METHODOLOGICAL, 2017, 100 : 196 - 221
  • [3] A mixed traffic capacity analysis and lane management model for connected automated vehicles: A Markov chain method
    Ghiasi, Amir
    Hussain, Omar
    Qian, Zhen
    Li, Xiaopeng
    [J]. TRANSPORTATION RESEARCH PART B-METHODOLOGICAL, 2017, 106 : 266 - 292
  • [4] Driving Behaviour Style Study with a Hybrid Deep Learning Framework Based on GPS Data
    Guo, Jingqiu
    Liu, Yangzexi
    Zhang, Lanfang
    Wang, Yibing
    [J]. SUSTAINABILITY, 2018, 10 (07)
  • [5] Khan U, 2014, INT CONF CONNECT VEH, P565, DOI 10.1109/ICCVE.2014.7297612
  • [6] Kober J, 2010, IEEE ROBOT AUTOM MAG, V17, P55, DOI 10.1109/MRA.2010.936952
  • [7] Microscopic modeling of the relaxation phenomenon using a macroscopic lane-changing model
    Laval, Jorge A.
    Leclercq, Ludovic
    [J]. TRANSPORTATION RESEARCH PART B-METHODOLOGICAL, 2008, 42 (06) : 511 - 522
  • [8] Refining Lane-Based Traffic Signal Settings to Satisfy Spatial Lane Length Requirements
    Liu, Yanping
    Wong, C. K.
    [J]. JOURNAL OF ADVANCED TRANSPORTATION, 2017
  • [9] Human-level control through deep reinforcement learning
    Mnih, Volodymyr
    Kavukcuoglu, Koray
    Silver, David
    Rusu, Andrei A.
    Veness, Joel
    Bellemare, Marc G.
    Graves, Alex
    Riedmiller, Martin
    Fidjeland, Andreas K.
    Ostrovski, Georg
    Petersen, Stig
    Beattie, Charles
    Sadik, Amir
    Antonoglou, Ioannis
    King, Helen
    Kumaran, Dharshan
    Wierstra, Daan
    Legg, Shane
    Hassabis, Demis
    [J]. NATURE, 2015, 518 (7540) : 529 - 533
  • [10] NAGEL K, 1992, J PHYS I, V2, P2221, DOI 10.1051/jp1:1992277