Joint Resource Allocation and Cache Placement for Location-Aware Multi-User Mobile-Edge Computing
Cited: 21
Authors:
Chen, Jiechen [1,2]
Xing, Hong [3]
Lin, Xiaohui [1]
Nallanathan, Arumugam [4]
Bi, Suzhi [1,5]
Affiliations:
[1] Shenzhen Univ, Coll Elect & Informat Engn, Shenzhen 518060, Peoples R China
[2] Kings Coll London, Ctr Telecommun Res, Dept Engn, London, England
[3] Hong Kong Univ Sci & Technol Guangzhou, Internet Things Thrust, Guangzhou 511400, Peoples R China
[4] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
[5] Peng Cheng Lab, Shenzhen 518066, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Deep learning (DL);
mobile-edge computing (MEC);
resource allocation;
service caching;
OPTIMIZATION;
MAXIMIZATION;
ENERGY;
DOI:
10.1109/JIOT.2022.3196908
CLC number:
TP [Automation Technology, Computer Technology];
Discipline code:
0812;
Abstract:
With the growing demand for latency-critical and computation-intensive Internet of Things (IoT) services, the IoT-oriented network architecture mobile-edge computing (MEC) has emerged as a promising technique to reinforce the computation capability of resource-constrained IoT devices. To exploit cloud-like functions at the network edge, service caching has been implemented to reuse the computation task input/output data, thus effectively reducing the delay incurred by data retransmissions and repeated execution of the same task. In a multiuser cache-assisted MEC system, users' preferences for different types of services, possibly dependent on their locations, play an important role in the joint design of communication, computation, and service caching. In this article, we consider multiple representative locations, where users at the same location share the same preference profile for a given set of services. Specifically, by exploiting the location-aware users' preference profiles, we propose joint optimization of the binary cache placement, the edge computation resource, and the bandwidth (BW) allocation to minimize the expected sum-energy consumption, subject to the BW and computation limitations as well as the service latency constraints. To effectively solve the mixed-integer nonconvex problem, we propose a deep learning (DL)-based offline cache placement scheme using a novel stochastic quantization-based discrete-action generation method. The proposed hybrid learning framework combines the benefits of the model-free DL approach and model-based optimization. Simulations verify that the proposed DL-based scheme saves roughly 33% and 6.69% of energy consumption compared with greedy caching and popular caching, respectively, while achieving up to 99.01% of the optimal performance.
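The stochastic quantization-based discrete-action generation mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name, the Bernoulli sampling rule, and the greedy eviction-based feasibility repair are all assumptions. The idea is that a DNN outputs a relaxed (continuous) caching probability per service, and several candidate binary placements are sampled from it so that the best feasible candidate can then be evaluated against the remaining resource-allocation subproblem.

```python
import numpy as np

def stochastic_quantization(probs, cache_capacity, sizes,
                            num_candidates=10, rng=None):
    """Sample candidate binary cache placements from relaxed probabilities.

    Hypothetical sketch: `probs[i]` in [0, 1] is the DNN's relaxed caching
    decision for service i, `sizes[i]` its storage cost. Each candidate is
    drawn Bernoulli(probs); candidates exceeding the cache capacity are
    repaired greedily by evicting the least-likely cached services.
    """
    rng = np.random.default_rng(rng)
    probs = np.asarray(probs, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    candidates = []
    for _ in range(num_candidates):
        # Bernoulli sampling of a binary placement vector
        x = (rng.random(probs.shape) < probs).astype(int)
        # Greedy repair: evict lowest-probability cached services
        # until the storage constraint is satisfied
        while (x * sizes).sum() > cache_capacity:
            cached = np.flatnonzero(x)
            x[cached[np.argmin(probs[cached])]] = 0
        candidates.append(x)
    return candidates
```

In a full pipeline, each feasible candidate would be scored by solving the (convex, given the binary placement) bandwidth and computation-resource allocation subproblem, and the candidate with the lowest expected sum-energy would be kept.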
Pages: 25698-25714
Page count: 17