Q-Learning Algorithm for Joint Computation Offloading and Resource Allocation in Edge Cloud

Cited: 0
Authors
Dab, Boutheina [1 ]
Aitsaadi, Nadjib [2 ]
Langar, Rami [3 ]
Affiliations
[1] UPMC, Sorbonne Univ, CNRS, LIP6, F-75005 Paris, France
[2] Univ Paris Est, LIGM, CNRS, UMR 8049,LiSSi,EA 3956,ESIEE Paris, F-93160 Noisy Le Grand, France
[3] Univ Paris Est, LIGM, CNRS, UMR 8049, F-77420 Champs Sur Marne, France
Source
2019 IFIP/IEEE SYMPOSIUM ON INTEGRATED NETWORK AND SERVICE MANAGEMENT (IM) | 2019
Keywords
Mobile Edge Computing; offloading; reinforcement learning; optimization; resource allocation; IEEE 802.11ac
DOI
N/A
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The advent of 5G technology, along with the high proliferation of mobile devices, entails an explosion of mobile traffic. Due to their resource limitations, mobile devices resort to connecting to Cloud servers to offload computational tasks and hence improve resource usage. Unfortunately, the conventional Mobile Cloud Computing (MCC) solution incurs high transmission latency. Recently, Mobile Edge Computing (MEC) has been envisioned as a promising technique for enhancing the computation capacities of mobile devices while reducing latency. The key insight of MEC is to push mobile computing and storage to the network edge (i.e., base stations and access points). The main challenge of MEC is to find an efficient assignment of tasks to local or edge devices while minimizing energy consumption and latency. In this paper, we propose a new joint task assignment and resource allocation approach for a multi-user WiFi-based MEC architecture. The main novelty of our work is that the optimal offloading decision is performed jointly with radio resource allocation. The objective of our scheme is to minimize energy consumption on the mobile terminal side under the application latency constraint. To do so, we first formulate the problem as a new online Reinforcement Learning problem considering both delay and device computation constraints. We then propose a new strategy based on a Q-Learning algorithm, named QL-Joint Task Assignment and Resource Allocation (QL-JTAR), to solve it. Based on extensive simulations conducted in the NS-3 simulator with real input traces, we show that our approach outperforms prominent related strategies in terms of energy consumption and delay, while ensuring a near-optimal solution.
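The abstract describes casting per-task offloading as a Q-Learning problem: an agent learns whether to run each task locally or on the edge so as to minimize energy under a latency budget. The following is a minimal toy sketch of that idea, not the paper's QL-JTAR algorithm; the state encoding, cost model, latency budget, and all numeric constants are illustrative assumptions.

```python
import random

ACTIONS = (0, 1)           # 0 = execute locally, 1 = offload to the edge AP
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate (assumed values)
# Toy state: (CPU demand in gigacycles, input data size in KB) -- illustrative.
STATES = [(c, d) for c in (1, 4, 8) for d in (2, 20)]

def cost(state, action):
    """Hypothetical energy cost plus a penalty when the latency budget is violated."""
    cycles, data_kb = state
    if action == 0:  # local execution: energy and delay scale with CPU cycles
        energy, latency = 0.5 * cycles, 0.2 * cycles
    else:            # offloading: radio transmission dominates, plus edge round-trip
        energy, latency = 0.1 * data_kb, 0.05 * data_kb + 1.0
    return energy + (10.0 if latency > 5.0 else 0.0)  # budget = 5 units (assumed)

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = {}  # Q-table: (state, action) -> estimated value (negative cost)
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < EPSILON:  # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q.get((s, x), 0.0))
        # Tasks are modeled as one-step episodes, so the update target is
        # simply the immediate reward (no discounted successor term).
        q[(s, a)] = (1 - ALPHA) * q.get((s, a), 0.0) + ALPHA * (-cost(s, a))
    return q

q = train()
# Greedy policy: offload exactly when the learned edge value is higher.
policy = {s: max(ACTIONS, key=lambda a: q.get((s, a), 0.0)) for s in STATES}
```

Under this toy cost model, the learned policy offloads compute-heavy tasks with small inputs (cheap to transmit) and keeps light tasks with large inputs local, which mirrors the energy/latency trade-off the paper optimizes jointly with radio resource allocation.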
Pages: 8