Improved Double Deep Q Network-Based Task Scheduling Algorithm in Edge Computing for Makespan Optimization

Citations: 24
Authors
Zeng, Lei [1 ]
Liu, Qi [2 ]
Shen, Shigen [3 ]
Liu, Xiaodong [4 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp Sci, Nanjing 210044, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Sch Software, Nanjing 210044, Peoples R China
[3] Huzhou Univ, Sch Informat Engn, Huzhou 313000, Peoples R China
[4] Edinburgh Napier Univ, Sch Comp, Edinburgh EH10 5DT, Scotland
Source
TSINGHUA SCIENCE AND TECHNOLOGY | 2024, Vol. 29, No. 3
Funding
National Social Science Fund of China; National Natural Science Foundation of China
Keywords
edge computing; task scheduling; reinforcement learning; makespan; Double Deep Q Network (DQN);
DOI
10.26599/TST.2023.9010058
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Edge computing nodes undertake an increasing number of tasks as business density rises. Efficiently allocating large-scale, dynamic workloads to edge computing resources has therefore become a critical challenge. This study proposes an edge task scheduling approach based on an improved Double Deep Q Network (DQN), in which the calculation of target Q values and the selection of actions are separated into two networks. A new reward function is designed, and a control unit is added to the agent's experience replay unit. The management of experience data is also modified to fully exploit its value and improve learning efficiency. Reinforcement learning agents usually start learning from an ignorant state, which is inefficient. This study therefore also proposes a particle swarm optimization algorithm with an improved fitness function that generates high-quality task scheduling solutions. These solutions are used to pre-train the agent's network parameters so that learning starts from a better cognitive level. The proposed algorithm is compared with six other methods in simulation experiments. Results show that it outperforms the benchmark methods in terms of makespan.
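The decoupling described in the abstract (one network selects the greedy action for the next state, the other evaluates it) is the standard Double DQN target update. The following is a minimal sketch under assumed names, layer sizes, and tensor shapes, not the authors' implementation; the paper's reward shaping, experience replay control unit, and PSO-based pre-training are omitted:

```python
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Simple state-to-action-value network (illustrative architecture, not from the paper)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def double_dqn_target(online_net: QNetwork,
                      target_net: QNetwork,
                      reward: torch.Tensor,      # shape (batch,)
                      next_state: torch.Tensor,  # shape (batch, state_dim)
                      done: torch.Tensor,        # shape (batch,), 1.0 if terminal
                      gamma: float = 0.99) -> torch.Tensor:
    """y = r + gamma * Q_target(s', argmax_a Q_online(s', a)) for non-terminal s'."""
    with torch.no_grad():
        # Action selection uses the online network ...
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)
        # ... while action evaluation uses the target network; this decoupling
        # is what mitigates the Q-value overestimation of vanilla DQN.
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

In training, this target would be regressed against Q_online(s, a) for the sampled transitions, with the target network parameters periodically copied from the online network.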
Pages: 806-817
Page count: 12