Non-cooperative multi-agent deep reinforcement learning for channel resource allocation in vehicular networks

Cited by: 0
Authors
Zhang, Fuxin [1 ]
Yao, Sihan [1 ]
Liu, Wei [1 ]
Qi, Liang [1 ]
Affiliations
[1] Shandong Univ Sci & Technol, Coll Comp Sci & Engn, Qingdao, Shandong, Peoples R China
Keywords
C-V2X networks; V2V communications; Resource allocation; Non-cooperative game; Multi-agent deep reinforcement learning; OPTIMAL POWER-CONTROL; PERFORMANCE ANALYSIS; C-V2X;
DOI
10.1016/j.comnet.2024.111006
Chinese Library Classification
TP3 [computing technology; computer technology];
Subject Classification Number
0812 ;
Abstract
Vehicle-to-vehicle (V2V) communication is a critical technology for supporting vehicle safety applications in vehicular networks. The high mobility of vehicular networks causes the channel state to change rapidly, which poses significant challenges to reliable V2V communication. Traditional resource allocation methods neglect fairness requirements and cannot guarantee reliable transmission for every V2V link. In this paper, we first develop a network payoff function that characterizes the satisfaction a V2V link obtains from the network. Based on this payoff function, the resource allocation problem among V2V links is formulated as a non-cooperative game. We then construct a non-cooperative multi-agent reinforcement learning method for resource sharing, in which each V2V link is treated as an agent. Each agent interacts with the unknown environment and with neighboring agents to learn the spectrum allocation and power control policy that reaches a Nash equilibrium, at which the V2V links obtain fair transmissions and achieve reliable communication under different network scenarios. Experimental results indicate that the proposed method outperforms benchmark schemes by more than 10% in packet delivery probability while achieving fair transmissions for V2V links.
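The abstract describes each V2V link as an independent, non-cooperative agent that learns a joint channel and transmit-power choice maximizing its own payoff. A minimal sketch of that idea, using tabular independent Q-learning with an invented interference-based payoff (the agent count, payoff model, action set, and hyperparameters below are illustrative assumptions, not the paper's actual model or DRL architecture):

```python
import random

# Illustrative sketch only: each V2V link is an independent agent choosing a
# (channel, power) action via stateless tabular Q-learning. Every constant
# and the payoff function here is invented for illustration.

N_AGENTS = 4                 # number of V2V links (agents)
N_CHANNELS = 2               # shared sub-channels
POWERS = [0.5, 1.0]          # discrete transmit power levels
ACTIONS = [(c, p) for c in range(N_CHANNELS) for p in POWERS]

def payoff(actions, i):
    """Toy SINR-like payoff: agent i's power over noise plus interference
    from the other agents transmitting on the same channel."""
    ch, p = actions[i]
    interference = sum(q for j, (c, q) in enumerate(actions)
                       if j != i and c == ch)
    return p / (0.1 + interference)

def train(episodes=3000, eps=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    # One private Q-table per agent: non-cooperative, no shared critic.
    Q = [[0.0] * len(ACTIONS) for _ in range(N_AGENTS)]
    for _ in range(episodes):
        # Epsilon-greedy action selection from each agent's own table.
        idx = [rng.randrange(len(ACTIONS)) if rng.random() < eps
               else max(range(len(ACTIONS)), key=Q[i].__getitem__)
               for i in range(N_AGENTS)]
        acts = [ACTIONS[a] for a in idx]
        for i in range(N_AGENTS):
            # Bandit-style update toward the payoff observed this round.
            Q[i][idx[i]] += alpha * (payoff(acts, i) - Q[i][idx[i]])
    return Q
```

Each agent updates only its own table from its own payoff, so a stable greedy joint action approximates a Nash equilibrium of the one-shot game: no agent can improve its payoff by unilaterally switching channel or power. The paper's method replaces the toy table with deep networks and a realistic C-V2X channel model.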
Pages: 16