Optimization of Energy Efficiency for Uplink mURLLC Over Multiple Cells Using Cooperative Multiagent Reinforcement Learning

Cited: 1
Authors
Song, Qingjiao [1 ]
Zheng, Fu-Chun [1 ]
Luo, Jingjing [1 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Elect & Informat Engn, Shenzhen 518055, Peoples R China
Keywords
Energy efficiency (EE); massive ultrareliable and low-latency communications (mURLLC); multiagent reinforcement learning; multicell cellular networks; GRANT-FREE NOMA; LOW-LATENCY COMMUNICATIONS; URLLC; ACCESS; NETWORKS; CHANNEL; DESIGN;
DOI
10.1109/JIOT.2024.3353185
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Multiagent reinforcement learning (RL) has recently been adopted to solve the energy efficiency (EE) optimization problem for massive ultrareliable and low-latency communications (mURLLC) in a single-cell cellular network under random access. Bursty traffic is an important characteristic of mURLLC users (UEs), yet this characteristic and its impact on the RL scheme are generally ignored in RL-based studies on the optimization of EE for uplink mURLLC. Moreover, in a smart factory with multiple cells, intercell interference and shadow fading further complicate EE optimization. To address these issues, we propose a novel cooperative multiagent scheme that maximizes the long-term EE in a multicell cellular network with mURLLC bursty traffic and a K-repetition scheme by optimizing the repetition value and the transmission power. A UE clustering algorithm and an intermittent learning mode are adopted to reduce the computational complexity and to mitigate the impact of bursty traffic on the RL scheme. A reward function is designed to address both long-term EE maximization and the number of successfully served UEs under a high-reliability requirement. The simulation results show that the proposed cooperative multiagent RL scheme greatly outperforms existing schemes in terms of long-term accumulated EE and the number of successfully served UEs.
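The abstract names three algorithmic ingredients: per-cluster agents that choose a repetition value and a transmit power, an intermittent learning mode that suspends learning when a cluster has no traffic, and a reward coupling EE with the number of successfully served UEs. The Python sketch below illustrates one plausible reading of these ingredients; the action ranges, the weights w_ee and w_served, and the names (ClusterAgent, reward) are illustrative assumptions, not the paper's actual algorithm.

```python
import random

# Candidate actions: each agent (one UE cluster) picks a K-repetition value
# and an uplink transmit power level. The specific ranges are assumptions.
K_VALUES = [1, 2, 4, 8]               # candidate repetition values K
POWER_LEVELS_DBM = [-10, 0, 10, 23]   # candidate transmit powers (dBm)
ACTIONS = [(k, p) for k in K_VALUES for p in POWER_LEVELS_DBM]


def reward(ee, served_ues, active_ues, w_ee=1.0, w_served=1.0):
    """Hypothetical reward balancing long-term EE (bits/Joule) against the
    fraction of active UEs served at the reliability target. The weighting
    and normalization are illustrative, not the paper's design."""
    served_ratio = served_ues / max(active_ues, 1)
    return w_ee * ee + w_served * served_ratio


class ClusterAgent:
    """Minimal epsilon-greedy Q-learning agent for one UE cluster."""

    def __init__(self, epsilon=0.1, alpha=0.1):
        self.q = [0.0] * len(ACTIONS)  # one Q-value per (K, power) action
        self.epsilon = epsilon
        self.alpha = alpha

    def select(self, has_traffic):
        # Intermittent learning mode: with bursty traffic, only explore in
        # slots where the cluster actually has packets to send; otherwise
        # fall back to the current greedy action.
        if has_traffic and random.random() < self.epsilon:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=self.q.__getitem__)

    def update(self, action, r):
        # Stateless (bandit-style) Q update toward the observed reward.
        self.q[action] += self.alpha * (r - self.q[action])
```

A fully cooperative version would additionally let agents in neighboring cells share observations or rewards to cope with intercell interference; a single tabular agent is shown here only to keep the sketch self-contained.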
Pages: 16351-16363
Page count: 13
Related Papers
38 records in total
[1] Ahsan, Waleed; Yi, Wenqiang; Liu, Yuanwei; Nallanathan, Arumugam. "A Reliable Reinforcement Learning for Resource Allocation in Uplink NOMA-URLLC Networks," IEEE Transactions on Wireless Communications, 2022, 21(8): 5989-6002.
[2] 3GPP, Rep. TS 38.913, 2020.
[3] 3GPP, Rep. TR 38.802, 2017.
[4] 3GPP, Rep. TS 38.214, 2020.
[5] 3GPP, Rep. TS 38.211, 2020.
[6] Boutiba, Karim; Bagaa, Miloud; Ksentini, Adlen. "On Using Deep Reinforcement Learning to Reduce Uplink Latency for uRLLC Services," 2022 IEEE Global Communications Conference (GLOBECOM 2022), 2022: 407-412.
[7] Chang, Hao-Hsuan; Song, Hao; Yi, Yang; Zhang, Jianzhong; He, Haibo; Liu, Lingjia. "Distributive Dynamic Spectrum Access Through Deep Reinforcement Learning: A Reservoir Computing-Based Approach," IEEE Internet of Things Journal, 2019, 6(2): 1938-1948.
[8] Fayaz, Muhammad; Yi, Wenqiang; Liu, Yuanwei; Nallanathan, Arumugam. "Transmit Power Pool Design for Grant-Free NOMA-IoT Networks via Deep Reinforcement Learning," IEEE Transactions on Wireless Communications, 2021, 20(11): 7626-7641.
[9] Holfeld, Bernd; Wieruch, Dennis; Wirth, Thomas; Thiele, Lars; Ashraf, Shehzad Ali; Huschke, Joerg; Aktas, Ismet; Ansari, Junaid. "Wireless Communication for Factory Automation: An Opportunity for LTE and 5G Systems," IEEE Communications Magazine, 2016, 54(6): 36-43.
[10] Jacobsen, T., IEEE Globecom Workshops, 2017.