Double RISs assisted task offloading for NOMA-MEC with action-constrained deep reinforcement learning

Cited by: 2
Authors
Fang, Junli [1 ]
Lu, Baoshan [2 ]
Hong, Xuemin [1 ]
Shi, Jianghong [1 ]
Affiliations
[1] Xiamen Univ, Dept Informat & Commun Engn, Xiamen, Fujian, Peoples R China
[2] Guangxi Normal Univ, Sch Elect & Informat Engn, Guangxi Key Lab Brain Inspired Comp & Intelligent, Guilin, Peoples R China
Keywords
Mobile edge computing; Non-orthogonal multiple access; Reconfigurable intelligent surface; Deep reinforcement learning; Twin delayed deep deterministic policy gradient; EDGE COMPUTING NETWORKS; RESOURCE-ALLOCATION; MAXIMIZATION; INTERNET; SYSTEMS;
DOI
10.1016/j.knosys.2023.111307
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Reconfigurable intelligent surfaces (RISs) are expected to enhance task offloading performance in non-line-of-sight mobile edge computing (MEC) scenarios. This paper aims to reduce the long-term energy consumption of task offloading in a double-RIS-assisted MEC system with non-orthogonal multiple access (NOMA). Because the formulated problem is non-convex, we propose an action-constrained deep reinforcement learning (DRL) framework based on the twin delayed deep deterministic policy gradient (TD3) algorithm, consisting of inner and outer optimization processes; this structure greatly reduces the action space of the DRL agent, making the framework easy to implement and fast to converge. In the inner optimization phase, using theoretical derivations, we propose a low-complexity method that optimally derives the transmit power, the local computing frequency, and the computation resource allocation at the base station. In the outer optimization phase, building on the inner solution, we use the TD3 algorithm to jointly determine the phase shifts of the RIS elements, the offloading ratio, and the transmission time for each time slot. Experimental results demonstrate that the proposed algorithm converges rapidly. Compared with single-RIS-assisted offloading, the proposed double-RIS-assisted offloading scheme reduces energy consumption by 42.8% on average.
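The inner/outer decomposition described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual model: the energy formulas, system parameters, and channel values are all assumptions, and a simple grid search stands in for the TD3 agent that would optimize the outer variables.

```python
import math

def inner_optimum(offload_ratio, t_tx, task_bits=1e6, kappa=1e-27,
                  cycles_per_bit=500, bandwidth=1e6, channel_gain=1e-6,
                  noise_power=1e-13, deadline=0.1):
    """Inner phase (illustrative): with the outer variables fixed by the
    agent (offloading ratio, transmission time), derive the local CPU
    frequency and transmit power in closed form and return total energy."""
    # Local computing: stretch the local share of cycles over the whole
    # deadline, which minimizes the kappa * f^2 dynamic energy per cycle.
    local_cycles = (1 - offload_ratio) * task_bits * cycles_per_bit
    f_local = local_cycles / deadline                  # CPU frequency, Hz
    e_local = kappa * local_cycles * f_local ** 2      # local energy, J

    # Offloading: invert the Shannon rate to get the minimum transmit
    # power that delivers the offloaded bits within t_tx seconds.
    rate = offload_ratio * task_bits / t_tx            # required bit/s
    p_tx = (2 ** (rate / bandwidth) - 1) * noise_power / channel_gain
    e_tx = p_tx * t_tx                                 # transmit energy, J
    return e_local + e_tx

# Outer phase stand-in: a coarse grid search over the agent's action
# (offloading ratio, transmission time) instead of a trained TD3 policy.
best_energy, best_action = min(
    (inner_optimum(r / 10, t / 100), (r / 10, t / 100))
    for r in range(1, 10) for t in range(1, 10)
)
```

The point of the decomposition is that the DRL agent only searches over the outer variables, while the inner variables are recovered analytically, which shrinks the action space the policy must explore.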
Pages: 13