A reinforcement learning-based computing offloading and resource allocation scheme in F-RAN

Cited: 0
Authors
Fan Jiang
Rongxin Ma
Youjun Gao
Zesheng Gu
Affiliations
[1] Xi’an University of Posts and Telecommunications,Shaanxi Key Laboratory of Information Communication Network and Security
[2] China Mobile System Integration Co., Ltd.
Source
EURASIP Journal on Advances in Signal Processing | Volume 2021
Keywords
Fog radio access networks; Computing offloading; Resource allocation; Deep reinforcement learning; Dueling deep Q-network; Deep Q-network
DOI
Not available
Abstract
This paper investigates the computing offloading policy and the allocation of computational resources for multiple user equipments (UEs) in device-to-device (D2D)-aided fog radio access networks (F-RANs). Because the wireless environment changes dynamically and the channel state information (CSI) is difficult to predict exactly, we formulate the joint task offloading and resource optimization problem as a mixed-integer nonlinear programming (MINLP) problem that maximizes the total utility of all UEs. Given the non-convexity of the formulated problem, we decouple it into two phases. First, a centralized deep reinforcement learning (DRL) algorithm, the dueling deep Q-network (DDQN), is used to obtain the most suitable offloading mode for each UE; to reduce the complexity of the DDQN-based offloading scheme, a pre-processing procedure is adopted. Then, a distributed deep Q-network (DQN) algorithm, built on the training result of the DDQN, allocates the appropriate computational resources to each UE. Combining the two phases yields the final offloading policy and resource allocation for each UE. Simulation results demonstrate the performance gains of the proposed scheme over existing baseline schemes.
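The dueling architecture named in the abstract splits the Q-function into a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) - mean over actions of A(s, a'). The following is a minimal numpy sketch of that aggregation for an offloading decision; the state features, layer sizes, and action set (local execution, D2D offload, fog-node offload) are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 3   # assumed modes: local execution, D2D offload, fog-node offload
STATE_DIM = 4   # assumed features: task size, CPU cycles, channel gain, queue length

# One shared hidden layer, then separate value and advantage heads
# (weights are random here; a real DDQN would learn them).
W_h = rng.normal(size=(STATE_DIM, 8))
W_v = rng.normal(size=(8, 1))          # value head  -> scalar V(s)
W_a = rng.normal(size=(8, N_ACTIONS))  # advantage head -> A(s, a)

def dueling_q(state: np.ndarray) -> np.ndarray:
    """Combine the value and advantage streams into action values Q(s, .)."""
    h = np.tanh(state @ W_h)
    v = h @ W_v                         # shape (1,)
    a = h @ W_a                         # shape (N_ACTIONS,)
    # Subtracting the mean advantage makes the V/A decomposition identifiable.
    return v + a - a.mean()

state = rng.normal(size=STATE_DIM)      # toy observation of one UE
q = dueling_q(state)
offload_mode = int(np.argmax(q))        # greedy offloading decision
print(offload_mode)
```

Note that with the mean-advantage subtraction, the average of Q(s, .) over actions collapses back to V(s), which is what stabilizes the two-stream training in practice.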