Radio Resource Management for C-V2X: From a Hybrid Centralized-Distributed Scheme to a Distributed Scheme

Cited: 19
Authors
Guo, Chi [1 ]
Wang, Cong [1 ]
Cui, Lin [1 ]
Zhou, Qiuzhan [1 ]
Li, Juan [1 ]
Affiliations
[1] Jilin Univ, Coll Commun Engn, Changchun 130022, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Reliability; Channel allocation; Resource management; Power control; Optimization; Distributed algorithms; Computational complexity; C-V2X; radio resource management; graph matching; reinforcement learning; POWER ALLOCATION; SPECTRUM; COMMUNICATION; LATENCY;
DOI
10.1109/JSAC.2023.3242723
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Spectrum sharing in cellular vehicle-to-everything (C-V2X) has been conceived as a promising solution for improving spectrum efficiency. However, the co-channel interference it incurs may severely degrade the performance of vehicular links. Radio resource management (RRM) is therefore needed to ensure communication reliability and increase system capacity. One challenge is that RRM involves channel allocation and power control, which are tightly coupled and hard to optimize simultaneously. Another challenge is that centralized RRM schemes are difficult to apply, as they require global channel state information (CSI) and incur high signaling overhead. To tackle these challenges, we propose a hybrid centralized-distributed RRM scheme and a distributed RRM scheme. Specifically, we prove a decoupling method that provides a theoretical lower bound, so that channel allocation and power control can be optimized independently. Given this decoupling, the hybrid centralized-distributed RRM scheme uses graph matching and reinforcement learning (GMRL) to maximize system capacity while guaranteeing reliability requirements. Further, to reduce computational complexity and signaling overhead, we develop a distributed RRM scheme based on hybrid-framework reinforcement learning (HFRL) that requires only local CSI. Finally, both schemes are evaluated numerically and shown to outperform other deep Q-network (DQN)-based schemes.
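Once channel allocation is decoupled from power control, the allocation step can be viewed as a maximum-weight matching between vehicular links and channels. The following is a minimal illustrative sketch of that matching view, not the paper's GMRL algorithm; the `best_assignment` helper and the rate values are hypothetical, and the brute-force search stands in for the graph-matching machinery the paper actually uses.

```python
# Illustrative sketch: channel allocation as maximum-weight bipartite
# matching between V2V links and channels. Brute force is feasible only
# for tiny instances; the paper's scheme uses graph matching plus
# reinforcement learning for the general problem.
from itertools import permutations

def best_assignment(capacity):
    """Return the link-to-channel assignment maximizing total rate.

    capacity[i][j] = achievable rate if link i transmits on channel j.
    """
    n = len(capacity)
    best_total, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        # perm[i] is the channel assigned to link i.
        total = sum(capacity[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

# Hypothetical spectral efficiencies (bit/s/Hz), 3 links over 3 channels.
rates = [
    [3.1, 1.2, 0.5],
    [0.9, 2.8, 1.7],
    [1.4, 0.6, 2.2],
]
assignment, total = best_assignment(rates)
```

For this example the diagonal assignment (link 0 on channel 0, and so on) maximizes the sum rate; with many links, polynomial-time matching algorithms replace the factorial-time enumeration shown here.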
Pages: 1023-1034 (12 pages)