A Deep Reinforcement Learning Scheme for SCMA-Based Edge Computing in IoT Networks

Cited by: 2
Authors
Liu, Pengtao [1 ]
Lei, Jing [1 ]
Liu, Wei [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Elect Sci & Technol, Changsha 410073, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sparse Code Multiple Access (SCMA); Multi-Access Edge Computing (MEC); Deep Reinforcement Learning (DRL); computation offloading; resource allocation; RESOURCE-ALLOCATION; RATE MAXIMIZATION; MULTIPLE-ACCESS; NOMA;
DOI
10.1109/GLOBECOM48099.2022.10001088
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
The application of sparse code multiple access (SCMA) to multi-access edge computing (MEC) networks can provide massive connectivity as well as timely and efficient computation services for resource-constrained Internet of Things (IoT) devices. This paper investigates computation-rate maximization in SCMA-MEC networks under a dynamic environment. We first formulate an optimization problem that maximizes the long-term computation rate of IoT devices under task delay constraints. We then propose a joint computation offloading and SCMA resource allocation algorithm based on a long short-term memory (LSTM) network and a dueling deep Q-network (DQN). In this algorithm, each IoT device acts as an agent; since each device can only observe part of the environment state, the LSTM network is used to predict the states of the other devices. The computation rate of the devices serves as the reward guiding action exploration in the dueling DQN, and after training the near-optimal computation offloading decisions, SCMA codebook allocation, and power allocation of the IoT devices are obtained. Numerical simulation results demonstrate that the proposed algorithm achieves a higher computation rate than the baseline schemes.
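The record contains no code, but as an illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch of a dueling Q-network fed by an LSTM over a device's partial-observation history. All names and dimensions (STATE_DIM, HIDDEN_DIM, NUM_ACTIONS, LSTMDuelingDQN) are assumptions made for illustration and are not taken from the paper.

# Illustrative sketch only (not the paper's implementation): a dueling Q-network
# whose input passes through an LSTM layer, in the spirit of combining LSTM-based
# state prediction with a dueling DQN for per-device offloading/codebook/power actions.
import torch
import torch.nn as nn

STATE_DIM = 16      # assumed size of a device's local observation
HIDDEN_DIM = 64     # assumed LSTM hidden size
NUM_ACTIONS = 24    # assumed joint action space: offloading x codebook x power level

class LSTMDuelingDQN(nn.Module):
    def __init__(self, state_dim=STATE_DIM, hidden_dim=HIDDEN_DIM, num_actions=NUM_ACTIONS):
        super().__init__()
        # LSTM summarizes the history of partial observations
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        # Dueling heads: state value V(s) and action advantage A(s, a)
        self.value_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1))
        self.advantage_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, num_actions))

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, state_dim) sequence of partial observations
        out, hidden = self.lstm(obs_seq, hidden)
        feat = out[:, -1, :]                      # summary at the last time step
        value = self.value_head(feat)             # (batch, 1)
        adv = self.advantage_head(feat)           # (batch, num_actions)
        # Standard dueling aggregation: Q = V + (A - mean(A))
        q = value + adv - adv.mean(dim=1, keepdim=True)
        return q, hidden

if __name__ == "__main__":
    net = LSTMDuelingDQN()
    obs = torch.randn(2, 5, STATE_DIM)            # 2 devices, 5-step observation history
    q_values, _ = net(obs)
    action = q_values.argmax(dim=1)               # greedy offloading/codebook/power choice
    print(q_values.shape, action)

In the multi-agent setting the abstract describes, each IoT device would hold its own copy of such a network, with the joint action index decoded into an offloading decision, an SCMA codebook choice, and a transmit-power level; the actual network structure, reward shaping, and training procedure are those of the paper, not this sketch.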
Pages: 5044 - 5049
Number of pages: 6