Fast Adaptive Task Offloading in Edge Computing Based on Meta Reinforcement Learning

Cited by: 287
Authors
Wang, Jin [1 ]
Hu, Jia [1 ]
Min, Geyong [1 ]
Zomaya, Albert Y. [2 ]
Georgalas, Nektarios [3 ]
Affiliations
[1] Univ Exeter, Dept Comp Sci, Exeter EX4 4PY, Devon, England
[2] Univ Sydney, Sch Informat Technol, Sydney, NSW 2006, Australia
[3] British Telecommun PLC, Dept Appl Res, Edinburgh EH12, Midlothian, Scotland
Keywords
Task analysis; Training; Neural networks; Heuristic algorithms; Mobile applications; Learning (artificial intelligence); Edge computing; Multi-access edge computing; task offloading; meta reinforcement learning; deep learning; MOBILE; ALGORITHM;
DOI
10.1109/TPDS.2020.3014896
CLC Number
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
Multi-access edge computing (MEC) aims to extend cloud services to the network edge to reduce network traffic and service latency. A fundamental problem in MEC is how to efficiently offload heterogeneous tasks of mobile applications from user equipment (UE) to MEC hosts. Recently, many deep reinforcement learning (DRL)-based methods have been proposed to learn offloading policies by interacting with the MEC environment, which consists of UE, wireless channels, and MEC hosts. However, these methods adapt poorly to new environments because they have low sample efficiency and need full retraining to learn updated policies for each new environment. To overcome this weakness, we propose a task offloading method based on meta reinforcement learning, which can adapt quickly to new environments with a small number of gradient updates and samples. We model mobile applications as Directed Acyclic Graphs (DAGs) and represent the offloading policy with a custom sequence-to-sequence (seq2seq) neural network. To train the seq2seq network efficiently, we propose a method that combines a first-order approximation with a clipped surrogate objective. The experimental results demonstrate that this new offloading method can reduce latency by up to 25 percent compared to three baselines while adapting quickly to new environments.
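The training method described above combines a first-order approximation to the meta-gradient with a PPO-style clipped surrogate objective. A minimal NumPy sketch of these two ingredients, assuming a Reptile-style averaging step for the first-order meta-update; the function names and parameter shapes are illustrative, not the paper's implementation:

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO-style clipped surrogate objective (per-sample, to be maximized).
    ratio = pi_new(a|s) / pi_old(a|s); clipping the ratio to [1-eps, 1+eps]
    bounds how far a single update can move the policy."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

def first_order_meta_update(meta_params, adapted_params_list, meta_lr=0.1):
    """First-order meta-update (Reptile-style): move the meta-parameters
    toward the average of the task-adapted parameters, avoiding the
    second-order derivatives of the full meta-gradient."""
    avg = np.mean(adapted_params_list, axis=0)
    return meta_params + meta_lr * (avg - meta_params)
```

In this sketch, each task's inner loop would maximize the clipped surrogate on its own offloading trajectories, and the outer loop would then pull the meta-parameters toward the adapted parameters, giving fast adaptation to a new MEC environment from the learned initialization.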
Pages: 242-253
Page count: 12
References
40 entries in total
[11] Dinh, Thinh Quang; La, Quang Duy; Quek, Tony Q. S.; Shin, Hyundong. Learning for Computation Offloading in Mobile Edge Computing. IEEE Transactions on Communications, 2018, 66(12): 6353-6367.
[12] Finn, C. Proceedings of Machine Learning Research, 2017, Vol. 70.
[13] Hong, Sung-Tae; Kim, Hyoil. QoE-Aware Computation Offloading to Capture Energy-Latency-Pricing Tradeoff in Mobile Clouds. IEEE Transactions on Mobile Computing, 2019, 18(9): 2174-2189.
[14] Huang, Liang; Bi, Suzhi; Zhang, Ying-Jun Angela. Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks. IEEE Transactions on Mobile Computing, 2020, 19(11): 2581-2593.
[15] Huang, Liang; Feng, Xu; Zhang, Cheng; Qian, Liping; Wu, Yuan. Deep reinforcement learning-based joint task offloading and bandwidth allocation for multi-user mobile edge computing. Digital Communications and Networks, 2019, 5(1): 10-17.
[16] King, D. B. ACS Symposium Series, 2015, Vol. 1214, p. 1. DOI: 10.1021/bk-2015-1214.ch001.
[17] Le Thanh Tan; Hu, Rose Qingyang. Mobility-Aware Edge Caching and Computing in Vehicle Networks: A Deep Reinforcement Learning. IEEE Transactions on Vehicular Technology, 2018, 67(11): 10190-10203.
[18] Lillicrap, T. P. Proc. International Conference on Learning Representations, 2015.
[19] Lin, Xue; Wang, Yanzhi; Xie, Qing; Pedram, Massoud. Task Scheduling with Dynamic Voltage and Frequency Scaling for Energy Minimization in the Mobile Cloud Computing Environment. IEEE Transactions on Services Computing, 2015, 8(2): 175-186.
[20] Mahmoodi, S. Eman; Uma, R. N.; Subbalakshmi, K. P. Optimal Joint Scheduling and Cloud Offloading for Mobile Applications. IEEE Transactions on Cloud Computing, 2019, 7(2): 301-313.