Optimizing Traffic Signal Control in Mixed Traffic Scenarios: A Predictive Traffic Information-based Deep Reinforcement Learning Approach

Citations: 0
Authors
Zhang, Zhengyang [1 ]
Zhou, Bin [1 ,2 ]
Zhang, Bugao [3 ]
Cheng, Ping [3 ]
Lee, Der-Horng [1 ]
Hu, Simon [1 ,2 ]
Affiliations
[1] Zhejiang Univ, ZJU UIUC Inst, Haining 314400, Peoples R China
[2] Zhejiang Univ, Coll Civil Engn & Architecture, Hangzhou 310058, Peoples R China
[3] ENJOYOR Technol CO LTD, Hangzhou 310000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep Reinforcement Learning; Connected Autonomous Vehicles; Intelligent Traffic Systems; Eco-Friendly; Traffic Signal Control;
DOI
10.1109/FISTS60717.2024.10485533
Chinese Library Classification (CLC)
U [Transportation];
Discipline codes
08; 0823;
Abstract
The rapid advancement of Connected Autonomous Vehicles (CAVs) is a driving force in the evolution of smart cities and Intelligent Transportation Systems (ITS). This has spurred extensive research in both fields, with a significant focus on vehicle-to-infrastructure (V2I) communication. Deep reinforcement learning is emerging as a popular method in this realm. However, current literature shows a significant gap in exploring the dynamics of traffic flow information for traffic signal control in a mixed traffic environment. Our research addresses this by introducing a predictive traffic information module. This module leverages historical traffic flow data to discern patterns at intersections, enabling proactive traffic signal control by anticipating future traffic states. Alongside this, we developed a reward function where agents, consisting of both traffic signals and CAVs, collaborate towards collective rewards. This strategy not only optimizes traffic signal control but also yields greater environmental benefits. Our experiments indicate that our method outperforms standard benchmarks at an isolated intersection, improving traffic efficiency and reducing environmental impacts by over 20% and 18%, respectively.
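The abstract's shared-reward idea, in which traffic signals and CAVs cooperate toward a collective objective balancing efficiency against environmental impact, can be sketched as below. This is a minimal illustration, not the paper's actual formulation: the function name, the waiting-time and emissions terms, and the weights are all assumptions chosen for clarity.

```python
def collective_reward(total_waiting_time: float,
                      total_emissions: float,
                      w_eff: float = 1.0,
                      w_env: float = 0.5) -> float:
    """Shared reward for signal and CAV agents (illustrative sketch).

    Both agent types receive the same scalar, so improving either
    traffic efficiency (less waiting) or environmental impact (fewer
    emissions) raises the collective reward. The negative sign turns
    the weighted cost into a reward to be maximized.
    """
    return -(w_eff * total_waiting_time + w_env * total_emissions)
```

For example, a step with 10 s of accumulated waiting and 4 g of emissions under the default weights yields a reward of -12.0; halving either term strictly improves the shared reward, which is what aligns the two agent types.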
Pages: 6
Related Papers
50 records
  • [21] A Novel Deep Reinforcement Learning Approach to Traffic Signal Control with Connected Vehicles
    Shi, Yang
    Wang, Zhenbo
    LaClair, Tim J.
    Wang, Chieh
    Shao, Yunli
    Yuan, Jinghui
    APPLIED SCIENCES-BASEL, 2023, 13 (04):
  • [22] A Regional Traffic Signal Control Strategy with Deep Reinforcement Learning
    Li, Congcong
    Yan, Fei
    Zhou, Yiduo
    Wu, Jia
    Wang, Xiaomin
    2018 37TH CHINESE CONTROL CONFERENCE (CCC), 2018, : 7690 - 7695
  • [23] Optimization Control of Adaptive Traffic Signal with Deep Reinforcement Learning
    Cao, Kerang
    Wang, Liwei
    Zhang, Shuo
    Duan, Lini
    Jiang, Guiminx
    Sfarra, Stefano
    Zhang, Hai
    Jung, Hoekyung
    ELECTRONICS, 2024, 13 (01)
  • [24] A survey on deep reinforcement learning approaches for traffic signal control
    Zhao, Haiyan
    Dong, Chengcheng
    Cao, Jian
    Chen, Qingkui
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [25] Digital-Twin-Based Deep Reinforcement Learning Approach for Adaptive Traffic Signal Control
    Kamal, Hani
    Yanez, Wendy
    Hassan, Sara
    Sobhy, Dalia
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (12): 21946 - 21953
  • [26] Application of Deep Reinforcement Learning in Traffic Signal Control: An Overview and Impact of Open Traffic Data
    Greguric, Martin
    Vujic, Miroslav
    Alexopoulos, Charalampos
    Miletic, Mladen
    APPLIED SCIENCES-BASEL, 2020, 10 (11):
  • [28] Traffic Signal Control Optimization Based on Deep Reinforcement Learning with Attention Mechanisms
    Ni, Wenlong
    Wang, Peng
    Li, Zehong
    Li, Chuanzhuang
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT III, 2024, 14449 : 147 - 158
  • [29] Adaptive urban traffic signal control based on enhanced deep reinforcement learning
    Cai, Changjian
    Wei, Min
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [30] Minimize Pressure Difference Traffic Signal Control Based on Deep Reinforcement Learning
    Yu, Pengcheng
    Luo, Jie
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 5493 - 5498