Transmit Power Pool Design for Uplink IoT Networks with Grant-free NOMA

Cited by: 1
Authors
Fayaz, Muhammad [1 ,2 ]
Yi, Wenqiang [1 ]
Liu, Yuanwei [1 ]
Nallanathan, Arumugam [1 ]
Affiliations
[1] Queen Mary Univ London, London, England
[2] Univ Malakand, Lower Dir, Khyber Pakhtunkhwa, Pakistan
Source
IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021) | 2021
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
NONORTHOGONAL MULTIPLE-ACCESS;
DOI
10.1109/ICC42927.2021.9500849
CLC Classification Number
TN [Electronic Technology, Communication Technology];
Subject Classification Code
0809;
Abstract
Grant-free non-orthogonal multiple access (GF-NOMA) is a potential multiple-access framework for internet-of-things (IoT) networks to enhance connectivity. However, resource allocation in GF-NOMA is challenging, and its effectiveness is limited by the absence of closed-loop power control. In this paper, we design a prototype of a layer-based transmit power pool using multi-agent reinforcement learning, which provides open-loop power control and offloads computation from the base station (BS). IoT users in each layer select their own transmit power level from this layer-based power pool, rather than transmitting on a BS-allocated sub-channel at a BS-allocated power level. The proposed algorithm requires no information exchange among IoT users and no assistance from the BS. Numerical results confirm that the double deep Q network (DDQN) based GF-NOMA algorithm achieves high accuracy and identifies an accurate transmit power level for each layer. Moreover, the proposed GF-NOMA system outperforms traditional grant-free orthogonal multiple access (GF-OMA) techniques in terms of throughput.
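The record contains no algorithmic detail beyond the abstract. As a rough, self-contained illustration of the double-Q idea behind the double deep Q network the abstract mentions, the Python sketch below has a single agent learn, by tabular double Q-learning, which level of a layer's power pool to transmit at. The power levels, state space, environment, and reward are invented placeholders, not the paper's system model (which is multi-agent and uses neural function approximation).

import random
import numpy as np

# Hypothetical layer power pool: each agent picks one of these transmit
# power levels (values in watts are invented placeholders).
POWER_POOL = [0.05, 0.10, 0.15, 0.20]
N_STATES = 8                 # toy discretized channel-quality states
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1

# Two estimators, as in double Q-learning: one selects the greedy action,
# the other evaluates it, which damps Q-value over-estimation.
qa = np.zeros((N_STATES, len(POWER_POOL)))
qb = np.zeros((N_STATES, len(POWER_POOL)))

def toy_env(state, action):
    """Placeholder environment: a throughput-like reward under random
    interference; NOT the paper's multi-agent system model."""
    power = POWER_POOL[action]
    sinr = power / (0.02 + 0.05 * random.random())   # fake interference
    reward = float(np.log2(1.0 + sinr))              # Shannon-style rate
    return reward, random.randrange(N_STATES)        # fake fading process

state = 0
for _ in range(5000):
    # epsilon-greedy selection over the sum of both estimates
    if random.random() < EPS:
        action = random.randrange(len(POWER_POOL))
    else:
        action = int(np.argmax(qa[state] + qb[state]))
    reward, nxt = toy_env(state, action)
    # randomly pick one table to update, using the other as the evaluator
    if random.random() < 0.5:
        best = int(np.argmax(qa[nxt]))
        qa[state, action] += ALPHA * (reward + GAMMA * qb[nxt, best] - qa[state, action])
    else:
        best = int(np.argmax(qb[nxt]))
        qb[state, action] += ALPHA * (reward + GAMMA * qa[nxt, best] - qb[state, action])
    state = nxt

print("Greedy power level per state:",
      [POWER_POOL[int(np.argmax(qa[s] + qb[s]))] for s in range(N_STATES)])

The same selection/evaluation decoupling is what the DDQN of van Hasselt et al. applies with neural networks; in the paper's setting each IoT user would run such an agent independently, with no inter-user information exchange.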
Pages: 6