Deep Reinforcement Learning for Multi-Agent Power Control in Heterogeneous Networks

Cited by: 38
Authors
Zhang, Lin [1 ]
Liang, Ying-Chang [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Key Lab Commun, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Ctr Intelligent Networking & Commun, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Power control; Wireless communication; Resource management; Interference; Heuristic algorithms; Rayleigh channels; Reinforcement learning; DRL; multi-agent; power control; MASC; HetNet; RESOURCE-ALLOCATION; FEEDBACK; ACCESS;
DOI
10.1109/TWC.2020.3043009
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
We consider a typical heterogeneous network (HetNet), in which multiple access points (APs) are deployed to serve users by reusing the same spectrum band. Since different APs and users may cause severe interference to each other, advanced power control techniques are needed to manage the interference and enhance the sum-rate of the whole network. Conventional power control techniques first collect instantaneous global channel state information (CSI) and then calculate sub-optimal solutions. Nevertheless, it is challenging to collect instantaneous global CSI in the HetNet, in which global CSI typically changes fast. In this article, we exploit deep reinforcement learning (DRL) to design a multi-agent power control algorithm, which has a centralized-training-distributed-execution framework. To be specific, each AP acts as an agent with a local deep neural network (DNN) and we propose a multiple-actor-shared-critic (MASC) method to train the local DNNs separately in an online trial-and-error manner. With the proposed algorithm, each AP can independently use the local DNN to control the transmit power with only local observations. Simulation results show that the proposed algorithm outperforms conventional power control algorithms in terms of both the converged average sum-rate and the computational complexity.
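The centralized-training-distributed-execution idea summarized above can be sketched as follows. This is a minimal NumPy illustration of the structure only, not the paper's actual MASC algorithm: every class name, layer size, and dimension here is a hypothetical placeholder, and no training update is shown. The point is the information flow: each AP's actor acts on local observations alone, while the shared critic, used only during training, sees the global state and the joint action.

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalActor:
    """Local policy of one AP: maps a local observation to a transmit
    power in (0, p_max). A single hidden layer stands in for the paper's
    local DNN (hypothetical architecture)."""
    def __init__(self, obs_dim, hidden=16, p_max=1.0):
        self.W1 = rng.normal(scale=0.1, size=(hidden, obs_dim))
        self.W2 = rng.normal(scale=0.1, size=(1, hidden))
        self.p_max = p_max

    def act(self, obs):
        h = np.tanh(self.W1 @ obs)
        z = (self.W2 @ h)[0]
        # Sigmoid squashes the raw output into a valid power level.
        return self.p_max / (1.0 + np.exp(-z))

class SharedCritic:
    """Centralized critic: scores a (global state, joint action) pair.
    It needs global information only at training time; the actors never
    use it at execution time."""
    def __init__(self, state_dim, n_agents, hidden=16):
        self.W1 = rng.normal(scale=0.1, size=(hidden, state_dim + n_agents))
        self.W2 = rng.normal(scale=0.1, size=(1, hidden))

    def value(self, state, joint_action):
        x = np.concatenate([state, joint_action])
        return float((self.W2 @ np.tanh(self.W1 @ x))[0])

n_aps, obs_dim = 3, 4
actors = [LocalActor(obs_dim) for _ in range(n_aps)]
critic = SharedCritic(state_dim=n_aps * obs_dim, n_agents=n_aps)

# Distributed execution: each AP decides from its own observation only.
local_obs = [rng.normal(size=obs_dim) for _ in range(n_aps)]
powers = np.array([a.act(o) for a, o in zip(actors, local_obs)])

# Centralized training would evaluate the joint action with the critic
# and back-propagate through each actor separately (update omitted here).
q = critic.value(np.concatenate(local_obs), powers)
```

In this shape, removing the critic after training leaves each AP with a standalone local policy, which is what allows execution without instantaneous global CSI.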
Pages: 2551-2564
Number of pages: 14