Distributed Learning-Based Resource Allocation for Self-Organizing C-V2X Communication in Cellular Networks

Cited by: 13
Authors
Banitalebi, Najmeh [1 ]
Azmi, Paeiz [1 ]
Mokari, Nader [1 ]
Arani, Atefeh Hajijamali [2 ]
Yanikomeroglu, Halim [3 ]
Affiliations
[1] Tarbiat Modares Univ, Dept Elect & Comp Engn, Tehran 14115, Iran
[2] Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON N2L 3G1, Canada
[3] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
Source
IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY | 2022 / Vol. 3
Keywords
Resource management; Device-to-device communication; Q-learning; Games; Interference; Learning systems; Uplink; Cellular vehicle-to-everything (C-V2X) communication; PD-NOMA; resource allocation; learning algorithm; DEVICE-TO-DEVICE; POWER ALLOCATION; OPTIMIZATION; MANAGEMENT; DESIGN; ACCESS; GAMES;
DOI
10.1109/OJCOMS.2022.3211340
CLC classification
TM [Electrical Technology]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809
Abstract
In this paper, we investigate a resource allocation problem for a cellular vehicle-to-everything (C-V2X) network with the goal of improving the energy efficiency of the system. To address this problem, self-organizing mechanisms are proposed for joint and disjoint subcarrier and power allocation, carried out in a fully distributed manner. A multi-agent Q-learning algorithm is proposed for joint power and subcarrier allocation. In addition, for simplicity, the problem is decoupled into two sub-problems: subcarrier allocation and power allocation. First, a distributed Q-learning method is proposed to allocate subcarriers among users. Then, given the selected subcarriers, a dynamic power allocation mechanism is proposed in which the problem is modeled as a non-cooperative game, and a no-regret learning algorithm is used to solve it. To evaluate the performance of the proposed approaches, they are compared against other learning mechanisms, as presented in Fig. 8. Simulation results show that the multi-agent joint Q-learning algorithm yields significant energy-efficiency gains of up to about 11% and 18% compared to the proposed disjoint mechanism and a third disjoint Q-learning mechanism for allocating power and subcarriers to each user, respectively; however, the joint algorithm requires more memory than the disjoint methods.
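To make the distributed subcarrier-allocation idea concrete, the following is a minimal toy sketch of multi-agent Q-learning in which each vehicle-agent keeps its own Q-table over subcarrier choices and updates it independently from a locally observed reward. The reward model, the single-state Q-table, and all parameter values here are illustrative assumptions, not the authors' exact formulation.

```python
import random

NUM_SUBCARRIERS = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed learning-rate/discount/exploration values

class Agent:
    def __init__(self):
        # Stateless (single-state) Q-table: one value per subcarrier action.
        self.q = [0.0] * NUM_SUBCARRIERS

    def choose(self):
        # Epsilon-greedy exploration over subcarrier actions.
        if random.random() < EPSILON:
            return random.randrange(NUM_SUBCARRIERS)
        return max(range(NUM_SUBCARRIERS), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Standard Q-learning update; with a single state, the bootstrap
        # term is the best Q-value of the same table.
        best_next = max(self.q)
        self.q[action] += ALPHA * (reward + GAMMA * best_next - self.q[action])

def local_reward(choices, agent_idx):
    # Toy stand-in for an energy-efficiency reward: full reward when the
    # agent's subcarrier is unshared, shrinking with co-channel collisions.
    collisions = choices.count(choices[agent_idx]) - 1
    return 1.0 / (1.0 + collisions)

random.seed(0)
agents = [Agent() for _ in range(4)]
for _ in range(2000):
    choices = [ag.choose() for ag in agents]
    for i, ag in enumerate(agents):
        ag.update(choices[i], local_reward(choices, i))

# Greedy choices after learning; agents tend to spread over distinct subcarriers.
final = [max(range(NUM_SUBCARRIERS), key=lambda a: ag.q[a]) for ag in agents]
print(final)
```

Each agent learns only from its own action and reward, with no message exchange, which is what makes the scheme fully distributed in the sense used above.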
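The power-allocation step is modeled as a non-cooperative game solved with a no-regret learning algorithm. Below is a small sketch of regret matching, a standard no-regret rule, applied to a toy discrete power-selection game; the power levels, the utility function, and all names are illustrative assumptions rather than the paper's formulation.

```python
import math
import random

POWER_LEVELS = [0.2, 0.5, 1.0]  # hypothetical discrete transmit powers (W)

def utility(my_power, others_power):
    # Toy energy-efficiency-like utility: achievable log-rate per watt,
    # with the other players' total power acting as interference.
    interference = 1.0 + sum(others_power)
    return math.log(1.0 + my_power / interference) / my_power

class RegretMatcher:
    def __init__(self):
        # Cumulative regret for each power level.
        self.regret = [0.0] * len(POWER_LEVELS)

    def choose(self):
        positive = [max(r, 0.0) for r in self.regret]
        total = sum(positive)
        if total == 0.0:
            return random.randrange(len(POWER_LEVELS))
        # Play each action with probability proportional to positive regret.
        return random.choices(range(len(POWER_LEVELS)), weights=positive)[0]

    def update(self, played, others_power):
        # Accumulate the regret of not having played each alternative action.
        u_played = utility(POWER_LEVELS[played], others_power)
        for a in range(len(POWER_LEVELS)):
            self.regret[a] += utility(POWER_LEVELS[a], others_power) - u_played

random.seed(1)
players = [RegretMatcher() for _ in range(3)]
for _ in range(500):
    plays = [p.choose() for p in players]
    for i, p in enumerate(players):
        others = [POWER_LEVELS[plays[j]] for j in range(3) if j != i]
        p.update(plays[i], others)

print(players[0].regret)
```

Regret matching guarantees that average regret vanishes over time, so the empirical play converges to the game's set of coarse correlated equilibria; this is the sense in which such a learner is "no-regret".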
Pages: 1719-1736
Page count: 18
References
60 references in total
[1]   An Efficient Resource Allocation Algorithm for D2D Communications Based on NOMA [J].
Alemaishat, Salem ;
Saraereh, Omar A. ;
Khan, Imran ;
Choi, Bong Jun .
IEEE ACCESS, 2019, 7 :120238-120247
[2]  
[Anonymous], 2021, REP 22867
[3]  
[Anonymous], 2021, 3GPP TR 21.916
[4]  
Arani A. H., 2016, Proc. IEEE Int. Conf. Commun. (ICC), P1
[5]   Distributed Learning for Energy-Efficient Resource Management in Self-Organizing Heterogeneous Networks [J].
Arani, Atefeh Hajijamali ;
Mehbodniya, Abolfazl ;
Omidi, Mohammad Javad ;
Adachi, Fumiyuki ;
Saad, Walid ;
Guvenc, Ismail .
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2017, 66 (10) :9287-9303
[6]   On the Design of Sidelink for Cellular V2X: A Literature Review and Outlook for Future [J].
Bazzi, Alessandro ;
Berthet, Antoine O. ;
Campolo, Claudia ;
Masini, Barbara Mavi ;
Molinaro, Antonella ;
Zanella, Alberto .
IEEE ACCESS, 2021, 9 :97953-97980
[7]  
Bennis M., 2012, IEEE International Conference on Communications (ICC 2012), P1592, DOI 10.1109/ICC.2012.6364308
[8]  
Bertsekas DP, 1995, PROCEEDINGS OF THE 34TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-4, P560, DOI 10.1109/CDC.1995.478953
[9]   Proactive Resource Management for LTE in Unlicensed Spectrum: A Deep Learning Perspective [J].
Challita, Ursula ;
Dong, Li ;
Saad, Walid .
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2018, 17 (07) :4674-4689
[10]   Resource Allocation for Device-to-Device Communications Underlaying Heterogeneous Cellular Networks Using Coalitional Games [J].
Chen, Yali ;
Ai, Bo ;
Niu, Yong ;
Guan, Ke ;
Han, Zhu .
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2018, 17 (06) :4163-4176