Adaptive Task Offloading in Coded Edge Computing: A Deep Reinforcement Learning Approach

Cited by: 10
Authors
Nguyen Van Tam [1,2]
Nguyen Quang Hieu [3]
Nguyen Thi Thanh Van [4]
Nguyen Cong Luong [1,2]
Niyato, Dusit [3]
Kim, Dong In [5]
Affiliations
[1] Phenikaa Univ, Fac Comp Sci, Hanoi 12116, Vietnam
[2] A&A Phoenix Grp JSC, Phenikaa Res & Technol Inst PRATT, Hanoi 11313, Vietnam
[3] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[4] Phenikaa Univ, Fac Elect & Elect Engn, Hanoi 12116, Vietnam
[5] Sungkyunkwan Univ, Dept Elect & Comp Engn, Suwon 16419, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Task analysis; Costs; Reinforcement learning; Codes; Edge computing; Partitioning algorithms; Optimization; Maximum distance separable code; coded edge computing; deep reinforcement learning;
DOI
10.1109/LCOMM.2021.3116036
Chinese Library Classification
TN [Electronic technology, communication technology];
Discipline Classification Code
0809;
Abstract
In this letter, we consider a Coded Edge Computing (CEC) network in which a client encodes its computation subtasks with a Maximum Distance Separable (MDS) code before offloading them to helpers. The CEC network is heterogeneous: the helpers differ in computing capacity, wireless communication stability, and computing price. The client therefore needs to determine a desirable size for the MDS-coded subtasks and select proper helpers such that the computation latency stays within the deadline and the incentive cost is minimized. This problem is challenging because the helpers' computing capacity, communication quality, and computing price are generally dynamic and random. We thus propose to adopt a Deep Reinforcement Learning (DRL) algorithm that allows the client to learn and make optimal decisions without any prior knowledge of the network environment. The experimental results reveal that the proposed algorithm outperforms standard Q-learning and baseline algorithms in terms of both computation latency and incentive cost.
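The setting described above can be summarized as an (n, k) MDS-coded offloading model: the client sends one coded subtask to each of n selected helpers, and the task is recoverable as soon as any k helpers return their results. The minimal sketch below illustrates only that latency/cost structure under assumed parameters; it is not the authors' implementation, and the helper rates, computing speeds, prices, deadline, and the function offload_round are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def offload_round(task_bits, k, helper_rates, helper_speeds, helper_prices, deadline):
    """Simulate one MDS-coded offloading round.

    Returns (latency, cost, met_deadline). Each selected helper receives one
    coded subtask of size task_bits / k; the task is recoverable once the k
    fastest helpers finish (the MDS property).
    """
    subtask_bits = task_bits / k
    tx_time = subtask_bits / helper_rates        # wireless transmission time per helper
    comp_time = subtask_bits / helper_speeds     # computation time per helper
    finish = np.sort(tx_time + comp_time)
    latency = finish[k - 1]                      # k-th fastest helper determines completion
    cost = np.sum(helper_prices * subtask_bits)  # incentive paid to every selected helper
    return latency, cost, latency <= deadline

# Example: 6 heterogeneous helpers, code dimension k = 4, a 1 Mb task, 50 ms deadline.
rates = rng.uniform(5e6, 20e6, size=6)    # bit/s, models unstable wireless links
speeds = rng.uniform(1e7, 5e7, size=6)    # bits processed per second (computing capacity)
prices = rng.uniform(1e-8, 5e-8, size=6)  # incentive cost per offloaded bit
print(offload_round(1e6, k=4, helper_rates=rates, helper_speeds=speeds,
                    helper_prices=prices, deadline=0.05))
```

A DRL agent in the spirit of the letter would treat the choice of k and the helper subset as its action and shape its reward from the resulting latency (relative to the deadline) and incentive cost.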
Pages: 3878-3882
Number of pages: 5