Joint Resource Allocation and Computation Offloading in Mobile Edge Computing for SDN based Wireless Networks

Cited by: 78
Authors
Kiran, Nahida [1 ,2 ]
Pan, Chunyu [1 ,2 ]
Wang, Sihua [1 ,2 ]
Yin, Changchuan [1 ,2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing Lab Adv Informat Network, Beijing 100876, Peoples R China
[2] Beijing Univ Posts & Telecommun, Beijing Key Lab Network Syst Architecture & Conve, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
Mobile edge computing; resource allocation; software defined cellular networks; task offloading; wireless networks;
DOI
10.1109/JCN.2019.000046
Chinese Library Classification (CLC) number
TP [Automation Technology; Computer Technology];
Subject classification code
0812;
Abstract
The rapid growth of internet usage and the distributed computing resources of edge devices create the need for a capable controller that ensures efficient utilization of distributed computing resources in mobile edge computing (MEC). We envision future MEC services in which the quality of experience (QoE) is further enhanced by software defined networking (SDN) capabilities that reduce application-level response time without service disruptions. Although SDN was not proposed specifically for edge computing, it can serve as an enabler that lowers the complexity barriers involved and lets the real potential of edge computing be realized. In this paper, we investigate the task offloading and resource allocation problem in wireless MEC, aiming to minimize delay while simultaneously saving the battery power of the user device. However, obtaining an optimal policy in such a dynamic task offloading system is challenging. Learning from experience plays a vital role in time-varying dynamic systems, and reinforcement learning (RL) takes a long-term goal into consideration in addition to the immediate reward, which is essential in a dynamic environment. A novel software defined edge cloudlet (SDEC) based RL optimization framework is proposed to tackle task offloading and resource allocation in wireless MEC. Specifically, Q-learning and cooperative Q-learning based reinforcement learning schemes are proposed for this intractable problem. Simulation results show that the proposed scheme achieves 31.39% and 62.10% reductions in sum delay compared with benchmark methods, namely traditional Q-learning with a random algorithm and Q-learning with epsilon-greedy exploration.
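To illustrate the kind of tabular Q-learning scheme the abstract refers to, a minimal sketch follows. It shows a binary offloading decision (local execution vs. offload to an edge cloudlet) learned with a standard one-step Q-learning update and epsilon-greedy exploration. The state encoding (task size, channel quality), the delay-plus-energy reward, and the toy environment are assumptions made for illustration, not the paper's exact SDEC formulation or its cooperative Q-learning variant.

import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for a binary offloading decision.
# The state, reward, and environment dynamics below are illustrative
# assumptions, not the authors' exact system model.

ACTIONS = ["local", "offload"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term value

def choose_action(state):
    """Epsilon-greedy selection over the two offloading choices."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step(state, action):
    """Toy environment: reward is the negative of a synthetic
    delay-plus-energy cost; the next state is drawn at random."""
    task_size, channel_quality = state
    if action == "offload":
        delay = task_size / (1 + channel_quality)  # better channel -> lower delay
        energy = 0.2 * task_size                   # transmission energy
    else:
        delay = task_size / 0.5                    # slower local CPU
        energy = 0.5 * task_size                   # local computation energy
    next_state = (random.randint(1, 5), random.randint(0, 3))
    return -(delay + energy), next_state

if __name__ == "__main__":
    state = (random.randint(1, 5), random.randint(0, 3))
    for _ in range(10_000):
        action = choose_action(state)
        reward, next_state = step(state, action)
        update(state, action, reward, next_state)
        state = next_state
    # After training, the greedy policy favors offloading when the channel is good.
    print(max(ACTIONS, key=lambda a: Q[((3, 3), a)]))

The long-term (discounted) return maximized here is what distinguishes this approach from a myopic rule that only minimizes the immediate delay, which is the point the abstract makes about RL in a dynamic offloading system.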
Pages: 1-11
Number of pages: 11