Energy-Efficient Power Control and Resource Allocation Based on Deep Reinforcement Learning for D2D Communications in Cellular Networks

Cited by: 3
Authors
Alenezi, Sami [1 ]
Luo, Chunbo [1 ]
Min, Geyong [1 ]
Affiliations
[1] Univ Exeter, Dept Comp Sci, Exeter EX4 4QF, Devon, England
Source
20th International Conference on Ubiquitous Computing and Communications (IUCC) / 20th International Conference on Computer and Information Technology (CIT) / 4th International Conference on Data Science and Computational Intelligence (DSCI) / 11th International Conference on Smart Computing, Networking, and Services (SmartCNS), 2021
Keywords
Reinforcement learning; D2D communications; Resource allocation; Power control; Energy efficiency; Device-to-device communication
DOI
10.1109/IUCC-CIT-DSCI-SmartCNS55181.2021.00026
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Device-to-Device (D2D) communication has emerged as a promising paradigm for enhancing the performance of cellular networks. D2D communication enables users to communicate directly without routing traffic through the base station, thereby improving spectral efficiency and reducing communication delay. However, because D2D links share and reuse cellular spectrum, the resulting intertwined interference environment may limit network performance. In this paper, we propose a Proximal Policy Optimisation (PPO) algorithm based on a Markov Decision Process (MDP) formulation to optimise resource allocation and improve energy efficiency. Resource allocation and power control are jointly considered with the aim of maximising overall network throughput while guaranteeing the minimum Quality of Service (QoS) requirement. Extensive simulation experiments validate the efficacy of the proposed scheme. The results demonstrate that our method outperforms the traditional method in terms of energy efficiency and training time.
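The abstract's objective (energy efficiency under a minimum-QoS constraint) can be illustrated with the standard quantities such formulations typically build on: per-link SINR, Shannon throughput, and bits-per-joule energy efficiency. The sketch below is not the authors' implementation; the function names, the circuit-power term `p_circuit`, and all numeric values are hypothetical placeholders.

```python
import math

def sinr(p_tx, gain, interference, noise):
    """Signal-to-interference-plus-noise ratio (linear scale)."""
    return (p_tx * gain) / (interference + noise)

def throughput(bandwidth_hz, sinr_lin):
    """Shannon capacity in bit/s: B * log2(1 + SINR)."""
    return bandwidth_hz * math.log2(1.0 + sinr_lin)

def energy_efficiency(p_tx, gain, interference, noise, bandwidth_hz,
                      p_circuit=0.1, sinr_min=1.0):
    """Bits per joule; zero reward if the minimum-SINR QoS constraint
    is violated (a common way to encode QoS in an MDP reward)."""
    s = sinr(p_tx, gain, interference, noise)
    if s < sinr_min:
        return 0.0
    return throughput(bandwidth_hz, s) / (p_tx + p_circuit)

# Example: 1 MHz channel, 100 mW transmit power (illustrative values)
ee = energy_efficiency(p_tx=0.1, gain=1e-6, interference=1e-9,
                       noise=1e-10, bandwidth_hz=1e6)
print(f"energy efficiency: {ee:.3e} bit/J")
```

In a PPO agent of the kind the paper describes, a quantity like `energy_efficiency` would serve as the per-step reward, with the transmit power and channel assignment forming the action space.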
Pages: 76-83
Page count: 8