Echo State Networks for Self-Organizing Resource Allocation in LTE-U With Uplink-Downlink Decoupling

Cited by: 89
Authors
Chen, Mingzhe [1 ,2 ]
Saad, Walid [3 ,4 ]
Yin, Changchuan [1 ,2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing Lab Adv Informat Network, Beijing 100876, Peoples R China
[2] Beijing Univ Posts & Telecommun, Beijing Key Lab Network Syst Architecture & Conve, Beijing 100876, Peoples R China
[3] Virginia Tech, Bradley Dept Elect & Comp Engn, Wireless VT, Blacksburg, VA 24061 USA
[4] Kyung Hee Univ, Dept Comp Sci & Engn, Hoegi Dong, South Korea
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
Game theory; resource allocation; heterogeneous networks; reinforcement learning; LTE-U; INTERFERENCE MANAGEMENT;
DOI
10.1109/TWC.2016.2616400
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Uplink-downlink decoupling, in which users can be associated with different base stations in the uplink and downlink of heterogeneous small cell networks (SCNs), has recently attracted significant attention. However, most existing works focus on simple association mechanisms in LTE SCNs that operate only in the licensed band. In contrast, in this paper, the problem of resource allocation with uplink-downlink decoupling is studied for an SCN that incorporates LTE in the unlicensed band (LTE-U). Here, users can access both licensed and unlicensed bands while being associated with different base stations. This problem is formulated as a noncooperative game that incorporates user association, spectrum allocation, and load balancing. To solve this problem, a distributed algorithm based on the machine learning framework of echo state networks (ESNs) is proposed. The proposed algorithm allows the small base stations to autonomously choose their optimal resource allocation strategies given only limited information on the network's and users' states. It is shown that the algorithm converges to a stationary mixed-strategy distribution that constitutes a mixed-strategy Nash equilibrium of the studied game. Simulation results show that, in terms of the sum-rate of the 50th percentile of users, the proposed approach yields gains of up to 167% compared with a Q-learning algorithm. The results also show that the ESN approach considerably reduces the amount of information exchange required in the wireless network.
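What makes the ESN framework attractive for distributed allocation is that only a linear readout is trained on top of a fixed random recurrent reservoir, so each small base station's learning step stays cheap. A minimal sketch of that mechanism (the network sizes, leak rate, and stand-in input/target data below are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 3, 50                      # input dimension, reservoir size (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Scale the recurrent weights so the spectral radius is below 1,
# a standard sufficient heuristic for the echo state property.
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs, leak=0.3):
    """Collect leaky-integrated reservoir states for a sequence of inputs."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained, here via ridge regression.
T = 200
U = rng.standard_normal((T, n_in))       # stand-in for observed network/user state
y = U[:, 0:1]                            # stand-in target (e.g., a utility estimate)
X = run_reservoir(U)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out                         # readout prediction, shape (200, 1)
```

Because `W_in` and `W` are never updated, training reduces to one linear solve per base station, which is the source of the limited-information-exchange advantage the abstract reports.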
Pages: 3-16
Page count: 14
Cited References
38 records in total
[1]  
[Anonymous], 2015, CISC VIS NETW IND GL
[2]  
[Anonymous], 2013, PROC IEEE VTC SPRING
[3]  
[Anonymous], 2008, IEEE T VEHICULAR TEC
[4]  
Bauduin M., 2015, P IEEE 81 VEH TECHN, P1, DOI 10.1109/VTCSPRING.2015.7145827
[5]  
Bennis M, 2011, IEEE ICC
[6]  
Bennis M., 2010, 2010 IEEE Globecom Workshops (GC'10), P706, DOI 10.1109/GLOCOMW.2010.5700414
[7]   Self-Organization in Small Cell Networks: A Reinforcement Learning Approach [J].
Bennis, Mehdi ;
Perlaza, Samir M. ;
Blasco, Pol ;
Han, Zhu ;
Poor, H. Vincent .
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2013, 12 (07) :3202-3212
[8]   Performance analysis of the IEEE 802.11 distributed coordination function [J].
Bianchi, G .
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2000, 18 (03) :535-547
[9]   Why to Decouple the Uplink and Downlink in Cellular Networks and How To Do It [J].
Boccardi, Federico ;
Andrews, Jeffrey ;
Elshaer, Hisham ;
Dohler, Mischa ;
Parkvall, Stefan ;
Popovski, Petar ;
Singh, Sarabjot .
IEEE COMMUNICATIONS MAGAZINE, 2016, 54 (03) :110-117
[10]   Modeling reward functions for incomplete state representations via echo state networks [J].
Bush, K ;
Anderson, C .
Proceedings of the International Joint Conference on Neural Networks (IJCNN), Vols 1-5, 2005, :2995-3000