A joint task caching and computation offloading scheme based on deep reinforcement learning

Times Cited: 2
Authors
Tian, Huizi [1 ]
Zhu, Lin [1 ]
Tan, Long [1 ]
Affiliations
[1] Heilongjiang Univ, Dept Comp Sci & Technol, Xuefu Rd, Harbin 150080, Heilongjiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Mobile edge computing; Internet of vehicles; Deep reinforcement learning; Content caching; Task offloading; EDGE; INTERNET; STRATEGY;
DOI
10.1007/s12083-024-01836-2
CLC Classification
TP [Automation technology, computer technology];
Discipline code
0812 ;
Abstract
Considering the dynamic variability of the vehicular edge environment and the limited resources of edge servers, this paper proposes a joint task caching and computation offloading scheme based on deep reinforcement learning (DRL). Because the motion trajectories of different vehicles overlap and their task requests may coincide, this paper designs a vehicle-edge-cloud computing framework that fully exploits the cache resources of vehicles, edge servers, and the cloud to reduce task processing delay and energy consumption. Second, the scheme adopts partial offloading and collaboration among edge servers to make full use of the computational resources of vehicles, edge servers, and the cloud, avoiding wasted resources and reducing the load on vehicles and edge servers. In addition, a DRL-based task offloading scheme is proposed to obtain better task caching and offloading strategies. Simulation results show that the proposed scheme outperforms the compared schemes and effectively reduces the latency and energy consumption of task processing.
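The abstract describes an RL agent that decides where each task is processed (on the vehicle, at an edge server, or in the cloud) and benefits from cached results at the edge. As an illustrative sketch only, the decision loop can be shown with tabular Q-learning over a toy cost model; every state, cost value, and parameter below is invented for illustration and does not come from the paper, which uses deep RL rather than a table:

```python
import random

# Toy offloading model: actions 0=local (vehicle), 1=edge server, 2=cloud.
# State = (cached, load): is the task result cached at the edge, and how
# loaded is the edge server (0..2). Costs stand in for delay + energy.
ACTIONS = 3

def cost(state, action):
    cached, load = state
    # Illustrative costs: a cache hit makes the edge nearly free; a heavily
    # loaded edge server pushes tasks toward the cloud.
    if action == 1 and cached:
        return 0.5
    return {0: 5.0, 1: 1.0 + 2.0 * load, 2: 4.0}[action]

def train(episodes=3000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated processing cost
    for _ in range(episodes):
        state = (rng.random() < 0.5, rng.randrange(3))
        if rng.random() < eps:                      # epsilon-greedy exploration
            a = rng.randrange(ACTIONS)
        else:                                       # pick cheapest known action
            a = min(range(ACTIONS), key=lambda x: q.get((state, x), 0.0))
        c = cost(state, a)
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + alpha * (c - old)     # running cost estimate
    return q

def best_action(q, state):
    return min(range(ACTIONS), key=lambda a: q.get((state, a), float("inf")))
```

With this toy model the learned policy offloads to the edge when the result is cached or the edge is idle, and falls back to the cloud when the edge is congested. A table suffices here because the invented state space has only six states; the paper's setting, with continuous and high-dimensional vehicular states, is what motivates a deep network in place of the table.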
Pages: 26 / 26
Page count: 1