Liquid State Machine Learning for Resource and Cache Management in LTE-U Unmanned Aerial Vehicle (UAV) Networks

Cited by: 147
Authors
Chen, Mingzhe [1 ]
Saad, Walid [2 ]
Yin, Changchuan [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing Lab Adv Informat Network, Beijing 100876, Peoples R China
[2] Virginia Tech, Bradley Dept Elect & Comp Engn, Wireless VT, Blacksburg, VA 24061 USA
Funding
U.S. National Science Foundation; National Natural Science Foundation of China;
Keywords
Cache-enabled UAVs; LTE-U; resource allocation; machine learning; liquid state machine; TRAJECTORY DESIGN; MAXIMIZATION; EDGE; 5G;
DOI
10.1109/TWC.2019.2891629
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
In this paper, the problem of joint caching and resource allocation is investigated for a network of cache-enabled unmanned aerial vehicles (UAVs) that serve wireless ground users over the LTE licensed and unlicensed bands. The considered model focuses on users that can access both licensed and unlicensed bands while receiving content either directly from the cache units at the UAVs or via content server-UAV-user links. This problem is formulated as an optimization problem that jointly incorporates user association, spectrum allocation, and content caching. To solve this problem, a distributed algorithm based on the machine learning framework of the liquid state machine (LSM) is proposed. Using the proposed LSM algorithm, the cloud can predict the users' content request distribution while having only limited information on the network's and users' states. The proposed algorithm also enables the UAVs to autonomously choose the resource allocation strategies that maximize the number of users with stable queues, depending on the network states. Based on the users' association and content request distributions, the optimal contents that need to be cached at the UAVs and the optimal resource allocation are derived. Simulation results using real datasets show that the proposed approach yields up to 17.8% and 57.1% gains in the number of users with stable queues compared with two baseline algorithms: Q-learning with cache and Q-learning without cache. The results also show that the LSM improves the convergence time by up to 20% compared with conventional learning algorithms such as Q-learning.
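To make the prediction step in the abstract more concrete, the sketch below is a minimal, illustrative stand-in for the LSM-based content request prediction; it is not the authors' implementation. It uses an echo-state-style continuous reservoir with a ridge-regression readout rather than a spiking liquid, and all names, sizes, and parameters (N_CONTENTS, N_RESERVOIR, LEAK, the ridge coefficient) are assumptions made for the example.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): an echo-state-style reservoir
# standing in for the liquid state machine. It maps a history of observed
# per-content request frequencies to a prediction of the next-period content
# request distribution, which could then guide which contents to cache at a UAV.

rng = np.random.default_rng(0)

N_CONTENTS = 20      # number of cacheable contents (assumed)
N_RESERVOIR = 200    # reservoir ("liquid") size (assumed)
LEAK = 0.3           # leaky-integration rate (assumed)

# Random input and recurrent weights; recurrent matrix scaled to spectral radius < 1.
W_in = rng.uniform(-0.5, 0.5, size=(N_RESERVOIR, N_CONTENTS))
W = rng.uniform(-0.5, 0.5, size=(N_RESERVOIR, N_RESERVOIR))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))


def run_reservoir(requests):
    """Drive the reservoir with a (T, N_CONTENTS) sequence of observed request
    frequencies and return the (T, N_RESERVOIR) sequence of internal states."""
    x = np.zeros(N_RESERVOIR)
    states = []
    for u in requests:
        x = (1 - LEAK) * x + LEAK * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)


def train_readout(states, targets, ridge=1e-3):
    """Ridge-regression readout: predict the next-period request distribution
    from the current reservoir state."""
    X, Y = states[:-1], targets[1:]
    return np.linalg.solve(X.T @ X + ridge * np.eye(N_RESERVOIR), X.T @ Y)


# Toy usage with synthetic request histories: each row is an empirical request
# distribution (sums to 1) observed by the cloud over one period.
history = rng.dirichlet(np.ones(N_CONTENTS), size=100)
states = run_reservoir(history)
W_out = train_readout(states, history)

predicted = np.clip(states[-1] @ W_out, 0, None)
predicted /= predicted.sum()                 # renormalize into a distribution
top_k = np.argsort(predicted)[::-1][:5]      # candidate contents to cache
print("Predicted top-5 contents to cache:", top_k)
```

In the paper's setting, the predicted distribution would feed the joint user association, spectrum allocation, and caching decisions; this toy example only illustrates the reservoir-plus-readout prediction idea under the stated assumptions.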
Pages: 1504 - 1517
Number of pages: 14