Resource Allocation in Uplink NOMA Systems: A Hybrid-Decision-Based Multi-Agent Deep Reinforcement Learning Approach

Cited by: 0
Authors
Xie, Xianzhong [1 ]
Li, Min [2 ]
Shi, Zhaoyuan [2 ]
Yang, Helin [3 ]
Huang, Qian [2 ]
Xiong, Zehui [4 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Commun, Informat Engn Dept, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Sch Comp Sci & Technol, Chongqing 400065, Peoples R China
[3] Xiamen Univ, Sch Informat, Dept Informat & Commun Engn, Xiamen 361005, Peoples R China
[4] Singapore Univ Technol & Design, Pillar Informat Syst Technol & Design, Singapore 487372, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Non-orthogonal multiple access (NOMA); actor-critic (AC); multi-agent; power control; channel selection; user dynamic access; NONORTHOGONAL MULTIPLE-ACCESS; POWER ALLOCATION; JOINT POWER; NETWORKS; ASSIGNMENT; SUBCHANNEL;
DOI
10.1109/TVT.2023.3289567
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808; 0809;
Abstract
This correspondence investigates a joint power control, channel selection, and user dynamic access scheme to maximize the sum rate of non-orthogonal multiple access (NOMA) systems. The highly dynamic and uncertain environment hinders the collection of accurate instantaneous channel state information (CSI) at the base station for centralized resource management. Therefore, we propose a hybrid-decision-based multi-agent actor-critic (HD-MAAC) approach to optimize the sum rate of the system. The proposed scheme improves the standard actor-critic (AC) algorithm so that it produces both discrete actions (channel selection and user dynamic access) and continuous actions (power control). Simulation results verify that the proposed scheme achieves a higher sum rate than existing popular schemes under different settings.
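To illustrate the hybrid-decision idea described in the abstract, the following is a minimal sketch (not the authors' code) of an actor that outputs a discrete sub-channel choice together with a continuous transmit-power level, assuming a PyTorch setup. Class and parameter names such as HybridActor, n_channels, and max_power are illustrative assumptions, not taken from the paper; a centralized critic used during training is omitted.

# Minimal hybrid discrete/continuous actor sketch, assuming PyTorch.
import torch
import torch.nn as nn


class HybridActor(nn.Module):
    """Per-agent actor: categorical channel choice plus Gaussian power level."""

    def __init__(self, obs_dim: int, n_channels: int, hidden: int = 128, max_power: float = 1.0):
        super().__init__()
        self.max_power = max_power
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.channel_logits = nn.Linear(hidden, n_channels)   # discrete branch
        self.power_mean = nn.Linear(hidden, 1)                # continuous branch
        self.power_log_std = nn.Parameter(torch.zeros(1))

    def forward(self, obs: torch.Tensor):
        h = self.body(obs)
        channel_dist = torch.distributions.Categorical(logits=self.channel_logits(h))
        power_dist = torch.distributions.Normal(self.power_mean(h), self.power_log_std.exp())
        return channel_dist, power_dist

    def act(self, obs: torch.Tensor):
        channel_dist, power_dist = self.forward(obs)
        channel = channel_dist.sample()                        # which sub-channel to access
        raw_power = power_dist.sample()
        power = torch.sigmoid(raw_power) * self.max_power      # squash into [0, max_power]
        # Joint log-probability of the sampled channel and pre-squash power sample.
        log_prob = channel_dist.log_prob(channel) + power_dist.log_prob(raw_power).squeeze(-1)
        return channel, power.squeeze(-1), log_prob


if __name__ == "__main__":
    # Usage sketch: one actor per user agent.
    actor = HybridActor(obs_dim=8, n_channels=4)
    obs = torch.randn(1, 8)
    channel, power, log_prob = actor.act(obs)
    print(channel.item(), power.item(), log_prob.item())

In a multi-agent actor-critic arrangement of this kind, each user would train such an actor from local observations while a critic scores the joint actions; the exact network structure and training details in the paper may differ.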
Pages: 16760-16765
Number of pages: 6