Reinforcement Learning for User Clustering in NOMA-enabled Uplink IoT

Cited by: 9
Authors
Ahsan, Waleed [1 ]
Yi, Wenqiang [1 ]
Liu, Yuanwei [1 ]
Qin, Zhijin [1 ]
Nallanathan, Arumugam [1 ]
Affiliations
[1] Queen Mary Univ London, London, England
Source
2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS) | 2020
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
ALLOCATION;
DOI
10.1109/iccworkshops49005.2020.9145187
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Model-driven algorithms have been investigated in wireless communications for decades. More recently, model-free methods based on machine learning have been rapidly developed for non-orthogonal multiple access (NOMA) to dynamically optimize multiple parameters (e.g., the number of resource blocks and QoS). In this paper, with the aid of SARSA Q-learning and deep reinforcement learning (DRL), we propose a user-clustering-based resource allocation scheme for uplink NOMA in multi-cell systems. The scheme groups users according to network traffic so that the available resources are utilised efficiently: SARSA Q-learning is applied under light traffic and DRL under heavy traffic. To characterize the performance of the proposed optimization algorithms, the capacity achieved by all users is used to define the reward function. The proposed SARSA Q-learning and DRL algorithms enable base stations to efficiently assign available resources to IoT users under different traffic conditions. Simulation results show that both algorithms outperform orthogonal multiple access (OMA) in all experiments and converge to the maximum sum rate.
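For context, the following is a minimal sketch of how an on-policy SARSA update with a sum-rate reward of this kind could be organised. The state/action encoding, the table sizes, and the hyperparameters below are illustrative assumptions for exposition only, not the authors' implementation.

```python
import numpy as np

# Illustrative SARSA sketch for NOMA user clustering (assumed setup, not the paper's).
# Assumption: a discretised "state" encodes the current traffic/cluster occupancy,
# an "action" assigns the arriving user to one of N_ACTIONS clusters (resource blocks),
# and the reward is the resulting uplink sum rate over all users.

N_STATES, N_ACTIONS = 64, 8            # hypothetical state/action space sizes
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

Q = np.zeros((N_STATES, N_ACTIONS))

def epsilon_greedy(state: int) -> int:
    """Pick a cluster index, exploring with probability EPSILON."""
    if np.random.rand() < EPSILON:
        return int(np.random.randint(N_ACTIONS))
    return int(np.argmax(Q[state]))

def sum_rate_reward(sinrs: np.ndarray) -> float:
    """Reward = sum of per-user Shannon capacities (unit bandwidth assumed)."""
    return float(np.sum(np.log2(1.0 + sinrs)))

def sarsa_update(state: int, action: int, reward: float,
                 next_state: int, next_action: int) -> None:
    """On-policy update: Q(s,a) += alpha * [r + gamma * Q(s',a') - Q(s,a)]."""
    td_target = reward + GAMMA * Q[next_state, next_action]
    Q[state, action] += ALPHA * (td_target - Q[state, action])
```

Under heavy traffic, where the state space becomes too large for a table, the paper's DRL variant would replace the Q-table with a neural-network approximator; the tabular sketch above only illustrates the light-traffic case.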
Pages: 6