Deep reinforcement learning-based task scheduling and resource allocation for NOMA-MEC in Industrial Internet of Things

Cited by: 19
Authors
Lin, Lixia [1 ]
Zhou, Wen'an [1 ]
Yang, Zhicheng [1 ]
Liu, Jianlong [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Xitucheng Rd 10th, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Mobile edge computing; Non-Orthogonal Multiple Access; Delay-sensitive; Industrial internet of things; Prediction-based deep reinforcement learning; NONORTHOGONAL MULTIPLE-ACCESS; ENERGY-CONSUMPTION; EDGE; NETWORKS; MINIMIZATION; SYSTEMS;
DOI
10.1007/s12083-022-01348-x
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Mobile Edge Computing (MEC) and Non-Orthogonal Multiple Access (NOMA) are promising technologies for processing delay-sensitive tasks in the Industrial Internet of Things (IIoT). Cooperation among multiple MEC servers is essential to improving the processing capacity of MEC systems. However, the dynamic IIoT environment with unknown change models, including time-varying wireless channels, diversified task requests, and dynamic load on wireless resources and MEC servers, continuously affects the task offloading decision and NOMA user pairing, which poses great challenges to resource management in the NOMA-MEC-based IIoT network. To solve this problem, we design a distributed deep reinforcement learning (DRL) based solution that improves the task satisfaction ratio by jointly optimizing the task offloading decision and the sub-channel assignment under a binary computation offloading policy. For each IIoT device agent, to deal with partial state observability, a Recurrent Neural Network (RNN) is employed to predict the load states of the sub-channels and MEC servers, and the prediction is then used in the RL agent's decision. Simulation results show that the proposed prediction-based DRL (P-DRL) method achieves a higher task satisfaction ratio than existing schemes.
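The pipeline the abstract describes — a recurrent predictor forecasting sub-channel/MEC-server load, whose output then conditions the agent's offloading choice — can be sketched roughly as follows. This is a minimal NumPy illustration only: the sizes, the untrained random RNN weights, and the per-channel penalty heuristic are all placeholders, not the paper's trained P-DRL model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CH, N_MEC = 4, 2          # toy numbers of sub-channels and MEC servers
STATE = N_CH + N_MEC        # load vector the device agent (partially) observes
N_ACT = N_CH * (N_MEC + 1)  # (sub-channel, local-or-server) action pairs

# Toy recurrent load predictor: a single tanh cell with random
# (untrained, placeholder) weights standing in for the paper's RNN.
H = 8
Wx = rng.normal(0, 0.3, (H, STATE))
Wh = rng.normal(0, 0.3, (H, H))
Wo = rng.normal(0, 0.3, (STATE, H))

def rnn_predict(history):
    """Roll the cell over past load observations; return a next-load guess in (0, 1)."""
    h = np.zeros(H)
    for x in history:
        h = np.tanh(Wx @ x + Wh @ h)
    return 1.0 / (1.0 + np.exp(-(Wo @ h)))  # sigmoid squashes to load fractions

# Placeholder value estimates; a real agent would learn these.
Q = np.zeros(N_ACT)

def choose_action(pred_load, eps=0.1):
    """Epsilon-greedy choice that uses the *predicted* load: actions on
    resources forecast to be busy are penalized before the argmax."""
    if rng.random() < eps:
        return int(rng.integers(N_ACT))
    penalty = np.repeat(pred_load[:N_CH], N_MEC + 1)  # crude per-channel penalty
    return int(np.argmax(Q - penalty))

history = [rng.random(STATE) for _ in range(5)]  # past load observations
pred = rnn_predict(history)
a = choose_action(pred)
```

The design point the sketch mirrors is that prediction and decision are decoupled: the recurrent model compensates for partial observability by estimating the hidden load state, and the RL policy consumes that estimate rather than the raw, incomplete observation.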
Pages: 170-188 (19 pages)