CVDMARL: A Communication-Enhanced Value Decomposition Multi-Agent Reinforcement Learning Traffic Signal Control Method

Cited by: 4
Authors
Chang, Ande [1 ]
Ji, Yuting [2 ]
Wang, Chunguang [3 ]
Bie, Yiming [2 ]
Affiliations
[1] Criminal Invest Police Univ China, Coll Forens Sci, Shenyang 110035, Peoples R China
[2] Jilin Univ, Sch Transportat, Changchun 130022, Peoples R China
[3] Xi An Jiao Tong Univ, Sch Aerosp Engn, State Key Lab Strength & Vibrat Mech Struct, Xian 710049, Peoples R China
Keywords
traffic signal control; deep reinforcement learning; multi-agent reinforcement learning; communication; traffic congestion; PREDICTION;
DOI
10.3390/su16052160
Chinese Library Classification
X [Environmental Science, Safety Science];
Subject Classification
08 ; 0830 ;
Abstract
Effective traffic signal control (TSC) plays an important role in reducing vehicle emissions and improving the sustainability of the transportation system. Recently, the feasibility of using multi-agent reinforcement learning for TSC has been widely verified. However, mapping road network states onto actions remains challenging because of the limited communication between agents and the partial observability of the traffic environment. To address this problem, this paper proposes a communication-enhanced value decomposition multi-agent reinforcement learning TSC method (CVDMARL). The model combines two communication modes, implicit and explicit; decouples the complex relationships among the signal agents through the centralized-training, decentralized-execution paradigm; and uses a modified deep network to mine and selectively transmit traffic flow features. We compare CVDMARL against six baseline methods on real-world datasets. The results show that, compared to MN_Light, the best-performing baseline, CVDMARL reduced peak-hour queue length by 9.12%, waiting time by 7.67%, and the number of training episodes required for convergence by 7.97%. While enriching the transmitted information, it also reduces communication overhead and achieves better control performance, offering a new approach to the coordinated control of multiple signalized intersections.
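The value decomposition with centralized training and decentralized execution described in the abstract can be illustrated with a minimal VDN-style toy sketch (a hedged illustration of the general technique, not the paper's CVDMARL architecture; the tabular Q-values, agent count, and update rule below are hypothetical stand-ins for the deep networks used in the paper):

```python
# Toy sketch of value decomposition with centralized training and
# decentralized execution (VDN-style additive decomposition).
# All names and parameters here are illustrative assumptions.

N_AGENTS = 2    # e.g. two signalized intersections
N_ACTIONS = 3   # e.g. candidate signal phases per intersection

# Per-agent Q-tables: local observation -> list of action values
# (a tabular stand-in for each agent's deep Q-network).
q_tables = [dict() for _ in range(N_AGENTS)]

def local_q(agent, obs):
    return q_tables[agent].setdefault(obs, [0.0] * N_ACTIONS)

def decentralized_actions(observations):
    # Execution is decentralized: each agent greedily picks its own action
    # from its local observation. Under an additive decomposition, the
    # per-agent argmax also maximizes the joint value Q_tot.
    return [max(range(N_ACTIONS), key=lambda a: local_q(i, o)[a])
            for i, o in enumerate(observations)]

def q_tot(observations, actions):
    # Centralized training signal: joint value = sum of local values.
    return sum(local_q(i, o)[a]
               for i, (o, a) in enumerate(zip(observations, actions)))

def td_update(observations, actions, reward, next_obs,
              alpha=0.1, gamma=0.95):
    # One tabular TD step on the joint value, with the temporal-difference
    # error shared evenly across the agents' local tables.
    target = reward + gamma * q_tot(next_obs, decentralized_actions(next_obs))
    error = target - q_tot(observations, actions)
    for i, (o, a) in enumerate(zip(observations, actions)):
        local_q(i, o)[a] += alpha * error / N_AGENTS
```

The additive form is the simplest member of the value-decomposition family; methods such as QMIX (and, per the abstract, CVDMARL) replace the plain sum with a learned mixing network while keeping the same train-centrally, act-locally pattern.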
Pages: 17