Adaptive Traffic Signal Control Method Based on Offline Reinforcement Learning

Cited by: 2
Authors
Wang, Lei [1 ]
Wang, Yu-Xuan [1 ]
Li, Jian-Kang [2 ]
Liu, Yi [2 ]
Pi, Jia-Tian [1 ]
Affiliations
[1] Chongqing Normal Univ, Natl Ctr Appl Math Chongqing, Chongqing 401331, Peoples R China
[2] Chongqing Normal Univ, Sch Comp & Informat Sci, Chongqing 401331, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 22
Keywords
traffic signal control; offline reinforcement learning; deep learning; network
DOI
10.3390/app142210165
CLC Classification Number
O6 [Chemistry]
Discipline Classification Code
0703
Abstract
The acceleration of urbanization has led to increasingly severe traffic congestion, creating an urgent need for effective traffic signal control strategies to improve road efficiency. This paper proposes an adaptive traffic signal control method based on offline reinforcement learning (Offline RL) to address the limitations of traditional fixed-time signal control methods. By monitoring key parameters such as real-time traffic flow and queue length, the proposed method dynamically adjusts signal phases and durations in response to rapidly changing traffic conditions. At the core of this research is the design of a model named SD3-Light, which leverages advanced offline reinforcement learning to predict the optimal signal phase sequences and their durations based on real-time intersection state features. Additionally, this paper constructs a comprehensive offline dataset, which enables the model to be trained without relying on real-time traffic data, thereby reducing costs and improving the model's generalization ability. Experiments conducted on real-world traffic datasets demonstrate the effectiveness of the proposed method in reducing the average travel time. Comparisons with several existing methods highlight the clear advantages of our approach in enhancing traffic management efficiency.
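The abstract does not give SD3-Light's architecture or training procedure, so the following is only an illustrative sketch of the general idea it describes: learning a signal-phase policy purely from a fixed offline dataset, with no live interaction. It uses linear fitted Q-iteration on a synthetic logged dataset; every name, feature, reward, and number here is a hypothetical stand-in, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_offline_dataset(n=500):
    """Synthetic logged transitions: state = queue lengths [N, S, E, W]."""
    states = rng.integers(0, 10, size=(n, 4)).astype(float)
    actions = rng.integers(0, 2, size=n)          # 0 = N-S green, 1 = E-W green
    next_states = states.copy()
    for i, a in enumerate(actions):
        served = slice(0, 2) if a == 0 else slice(2, 4)
        # The served approaches discharge up to 4 vehicles; all approaches
        # then receive random arrivals.
        next_states[i, served] = np.maximum(next_states[i, served] - 4.0, 0.0)
        next_states[i] += rng.integers(0, 2, size=4)
    rewards = -next_states.sum(axis=1)            # penalize total queue length
    return states, actions, rewards, next_states

def features(s, a):
    """One block of state features per action, for a linear Q-function."""
    f = np.zeros(8)
    f[a * 4:(a + 1) * 4] = s
    return f

def fitted_q(states, actions, rewards, next_states, gamma=0.9, iters=30):
    """Fitted Q-iteration on the fixed dataset, with no environment
    interaction. Full offline RL methods add a conservatism term on top
    of this to handle actions poorly covered by the log."""
    w = np.zeros(8)
    X = np.array([features(s, a) for s, a in zip(states, actions)])
    for _ in range(iters):
        targets = rewards + gamma * np.array(
            [max(features(s2, b) @ w for b in (0, 1)) for s2 in next_states])
        w, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return w

def choose_phase(w, state):
    """Greedy phase choice from the learned Q-function."""
    return int(np.argmax([features(state, a) @ w for a in (0, 1)]))

S, A, R, S2 = make_offline_dataset()
w = fitted_q(S, A, R, S2)
# Long N-S queues should favor the N-S green phase.
print(choose_phase(w, np.array([9.0, 8.0, 1.0, 0.0])))
```

The key property shared with the paper's setting is that training consumes only the pre-collected log, which is why the dataset's coverage of state-action pairs (and the conservatism of the learner) matters so much in offline RL.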
Pages: 12
References
22 in total
[1] De Santis, Emanuele; Giuseppi, Alessandro; Pietrabissa, Antonio; Capponi, Michael; Priscoli, Francesco Delli. Satellite Integration into 5G: Deep Reinforcement Learning for Network Selection. Machine Intelligence Research, 2022, 19(2): 127-137.
[2] Deffayet, R. ACM SIGIR Forum, 2023, Vol. 56, p. 1.
[3] Dobrota, Nemanja; Stevanovic, Aleksandar; Mitrovic, Nikola. Development of Assessment Tool and Overview of Adaptive Traffic Control Deployments in the U.S. Transportation Research Record, 2020, 2674(12): 464-480.
[4] Feng, Yiheng; Head, K. Larry; Khoshmagham, Shayan; Zamanipour, Mehdi. A real-time adaptive signal control in a connected vehicle environment. Transportation Research Part C: Emerging Technologies, 2015, 55: 460-473.
[5] Fujimoto, S. Proceedings of Machine Learning Research, 2018, Vol. 80.
[6] Kumar, A. Advances in Neural Information Processing Systems, 2020, Vol. 33, p. 1179.
[7] Meng, Linghui; Wen, Muning; Le, Chenyang; Li, Xiyun; Xing, Dengpeng; Zhang, Weinan; Wen, Ying; Zhang, Haifeng; Wang, Jun; Yang, Yaodong; Xu, Bo. Offline Pre-trained Multi-agent Decision Transformer. Machine Intelligence Research, 2023, 20(2): 233-248.
[8] Miller, A. J. Operational Research Quarterly, 1963, 14: 373. DOI: 10.2307/3006800.
[9] Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis. Human-level control through deep reinforcement learning. Nature, 2015, 518(7540): 529-533.
[10] Pan, L. Advances in Neural Information Processing Systems, 2020, Vol. 33.