Speeding Task Allocation Search for Reconfigurations in Adaptive Distributed Embedded Systems Using Deep Reinforcement Learning

Cited by: 3
Authors
Rotaeche, Ramon [1 ]
Ballesteros, Alberto [1 ]
Proenza, Julian [1 ]
Affiliations
[1] Univ Illes Balears, Dept Matematiques & Informat, Palma De Mallorca 07122, Spain
Keywords
Deep Reinforcement Learning; Distributed Embedded Systems; Combinatorial Optimization; Machine Learning
DOI
10.3390/s23010548
CLC Classification
O65 [Analytical Chemistry]
Subject Classification
070302; 081704
Abstract
A Critical Adaptive Distributed Embedded System (CADES) is a group of interconnected nodes that must carry out a set of tasks to achieve a common goal, while fulfilling several requirements associated with their critical (e.g., hard real-time requirements) and adaptive nature. In these systems, a key challenge is to solve, in a timely manner, the combinatorial optimization problem involved in finding the best way to allocate the tasks to the available nodes (i.e., the task allocation), taking into account aspects such as the computational costs of the tasks and the computational capacity of the nodes. This problem is not trivial and there is no known polynomial-time algorithm to find the optimal solution. Several studies have proposed Deep Reinforcement Learning (DRL) approaches to solve combinatorial optimization problems and, in this work, we explore the application of such approaches to the task allocation problem in CADESs. We first discuss the potential advantages of using a DRL-based approach over several heuristic-based approaches to allocate tasks in CADESs, and we then demonstrate that a DRL-based approach can match the best-performing heuristic in terms of the optimality of the allocation, while requiring less time to generate it.
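The allocation problem described in the abstract is a capacitated assignment problem. As a purely illustrative sketch (not the method from the paper), the kind of greedy heuristic baseline a DRL approach would be compared against could look like the following; the task names, costs, and node capacities are hypothetical:

```python
def greedy_allocate(task_costs, node_capacities):
    """Greedy heuristic: assign each task, largest first, to the node
    with the most remaining capacity. Returns {task: node_index},
    or None if the heuristic finds no feasible allocation."""
    remaining = list(node_capacities)
    allocation = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        node = max(range(len(remaining)), key=lambda n: remaining[n])
        if remaining[node] < cost:
            return None  # this task does not fit anywhere
        allocation[task] = node
        remaining[node] -= cost
    return allocation

# Hypothetical example: three tasks, two nodes of capacity 6 each.
tasks = {"sense": 3, "plan": 5, "act": 2}
nodes = [6, 6]
print(greedy_allocate(tasks, nodes))  # {'plan': 0, 'sense': 1, 'act': 1}
```

Such heuristics run fast but can miss feasible or optimal allocations that an exhaustive search (or a well-trained DRL policy) would find, which is the trade-off the paper investigates.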
Pages: 23
Related Papers (50 records in total)
  • [21] Adaptive and Efficient Resource Allocation in Cloud Datacenters Using Actor-Critic Deep Reinforcement Learning
    Chen, Zheyi; Hu, Jia; Min, Geyong; Luo, Chunbo; El-Ghazawi, Tarek
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (08): 1911-1923
  • [22] A Dynamic Adaptive Jamming Power Allocation Method Based on Deep Reinforcement Learning
    Peng, X.; Xu, H.; Jiang, L.; Zhang, Y.; Rao, N.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2023, 51 (05): 1223-1234
  • [23] Adaptive workload adjustment for cyber-physical systems using deep reinforcement learning
    Xu, Shikang; Koren, Israel; Krishna, C. Mani
    SUSTAINABLE COMPUTING-INFORMATICS & SYSTEMS, 2021, 30
  • [24] Simultaneous task and energy planning using deep reinforcement learning
    Wang, Di; Hu, Mengqi; Weir, Jeffery D.
    INFORMATION SCIENCES, 2022, 607: 931-946
  • [25] Near-Optimal Vehicular Crowdsensing Task Allocation Empowered by Deep Reinforcement Learning
    Xiang, C.-C.; Li, Y.-Y.; Feng, L.; Chen, C.; Guo, S.-T.; Yang, P.-L.
    Jisuanji Xuebao/Chinese Journal of Computers, 2022, 45 (05): 918-934
  • [26] Assembly task allocation of human-robot collaboration based on deep reinforcement learning
    Xiong, Z.; Chen, H.; Wang, C.; Yue, M.; Hou, W.; Xu, B.
    Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 2023, 29 (03): 789-800
  • [27] Task Allocation of Multiple Unmanned Aerial Vehicles Based on Deep Transfer Reinforcement Learning
    Yin, Yongfeng; Guo, Yang; Su, Qingran; Wang, Zhetao
    DRONES, 2022, 6 (08)
  • [28] Federated Deep Reinforcement Learning for Multimedia Task Offloading and Resource Allocation in MEC Networks
    Zhang, Rongqi; Pan, Chunyun; Wang, Yafei; Yao, Yuanyuan; Li, Xuehua
    IEICE TRANSACTIONS ON COMMUNICATIONS, 2024, E107B (06): 446-457
  • [29] Deep reinforcement learning path planning and task allocation for multi-robot collaboration
    Li, Zhixian; Shi, Nianfeng; Zhao, Liguo; Zhang, Mengxia
    ALEXANDRIA ENGINEERING JOURNAL, 2024, 109: 408-423
  • [30] Adaptive Task Offloading in Coded Edge Computing: A Deep Reinforcement Learning Approach
    Nguyen Van Tam; Nguyen Quang Hieu; Nguyen Thi Thanh Van; Nguyen Cong Luong; Niyato, Dusit; Kim, Dong In
    IEEE COMMUNICATIONS LETTERS, 2021, 25 (12): 3878-3882