Distributed Multi-Agent Approach for Achieving Energy Efficiency and Computational Offloading in MECNs Using Asynchronous Advantage Actor-Critic

Cited: 2
Authors
Khan, Israr [1 ]
Raza, Salman [2 ]
Khan, Razaullah [3 ]
Rehman, Waheed ur [4 ]
Rahman, G. M. Shafiqur [5 ]
Tao, Xiaofeng [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Natl Engn Res Ctr Mobile Network Technol, Beijing 100876, Peoples R China
[2] Natl Text Univ, Dept Comp Sci, Faisalabad 37610, Pakistan
[3] Univ Engn & Technol, Dept Comp Sci, Mardan 23200, Pakistan
[4] Univ Peshawar, Dept Comp Sci, Peshawar 25120, Pakistan
[5] Beijing Univ Posts & Telecommun, Key Lab Universal Wireless Commun, Minist Educ, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
deep reinforcement learning; advanced asynchronous advantage actor-critic (A3C); multi-agent system; mobile edge computing; cloud computing; computational offloading; energy efficiency; REINFORCEMENT; ALLOCATION; DESIGN;
DOI
10.3390/electronics12224605
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Mobile edge computing networks (MECNs) based on hierarchical cloud computing can provide abundant resources to support the next-generation internet of things (IoT), which relies on artificial intelligence (AI). To address the instantaneous service and computation demands of IoT entities, AI-based solutions, particularly deep reinforcement learning (DRL) strategies, have been studied intensively in both academia and industry. However, many open challenges remain, namely slow agent convergence, network dynamics, resource diversity, and mode selection. We formulate a mixed-integer non-linear fractional programming (MINLFP) problem to jointly optimize computing and radio resources while maintaining quality of service (QoS) for each user's equipment. We adopt the advanced asynchronous advantage actor-critic (A3C) approach to take full advantage of distributed multi-agent solutions for achieving energy efficiency in MECNs. Numerical results show that the proposed approach, which employs A3C for computation offloading and resource allocation, significantly reduces energy consumption and improves energy efficiency. Its effectiveness is further demonstrated through comparison with other benchmarks.
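The core A3C update named in the abstract can be illustrated with a minimal sketch. Everything here (the function names `n_step_returns` and `a3c_losses`, the discount and entropy coefficients) is an illustrative assumption, not the paper's implementation; a full A3C system would additionally run several asynchronous worker agents that push gradients to one shared global actor-critic network.

```python
import numpy as np

def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step returns for one worker rollout.

    bootstrap_value is the critic's estimate V(s_T) of the state
    reached after the last step (0.0 if the episode terminated).
    """
    returns, R = [], bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R          # R_t = r_t + gamma * R_{t+1}
        returns.append(R)
    return list(reversed(returns))

def a3c_losses(returns, values, log_probs, entropies, beta=0.01):
    """Actor (policy-gradient) and critic (value) loss terms.

    The advantage A_t = R_t - V(s_t) weights the log-probability of
    each chosen action; the entropy bonus (scaled by beta) encourages
    exploration, as in the A3C formulation.
    """
    advantages = np.asarray(returns) - np.asarray(values)
    policy_loss = (-np.mean(np.asarray(log_probs) * advantages)
                   - beta * np.mean(entropies))
    value_loss = 0.5 * np.mean(advantages ** 2)
    return policy_loss, value_loss
```

In the multi-agent offloading setting, each device-side agent would collect such rollouts for its own offloading decisions (local execution vs. edge vs. cloud), with the reward shaped by energy consumption and QoS.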
Pages: 20
Related Papers
50 in total
  • [21] DPU-Enhanced Multi-Agent Actor-Critic Algorithm for Cross-Domain Resource Scheduling in Computing Power Network
    Wang, Shuaichao
    Guo, Shaoyong
    Hao, Jiakai
    Ren, Yinlin
    Qi, Feng
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (06) : 6008 - 6025
  • [22] SACHA: Soft Actor-Critic With Heuristic-Based Attention for Partially Observable Multi-Agent Path Finding
    Lin, Qiushi
    Ma, Hang
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (08) : 5100 - 5107
  • [23] Real-Time Optimal Energy Management of Multimode Hybrid Electric Powertrain With Online Trainable Asynchronous Advantage Actor-Critic Algorithm
    Biswas, Atriya
    Anselma, Pier Giuseppe
    Emadi, Ali
    IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION, 2022, 8 (02) : 2676 - 2694
  • [24] Blockchain-Enabled Secure Data Sharing Scheme in Mobile-Edge Computing: An Asynchronous Advantage Actor-Critic Learning Approach
    Liu, Lei
    Feng, Jie
    Pei, Qingqi
    Chen, Chen
    Ming, Yang
    Shang, Bodong
    Dong, Mianxiong
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (04) : 2342 - 2353
  • [25] Variations in Multi-Agent Actor-Critic Frameworks for Joint Optimizations in UAV Swarm Networks: Recent Evolution, Challenges, and Directions
    Alam, Muhammad Morshed
    Trina, Sayma Akter
    Hossain, Tamim
    Mahmood, Shafin
    Ahmed, Md. Sanim
    Arafat, Muhammad Yeasir
    DRONES, 2025, 9 (02)
  • [26] Efficient Resource Allocation for Multi-Beam Satellite-Terrestrial Vehicular Networks: A Multi-Agent Actor-Critic Method With Attention Mechanism
    He, Ying
    Wang, Yuhang
    Yu, F. Richard
    Lin, Qiuzhen
    Li, Jianqiang
    Leung, Victor C. M.
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (03) : 2727 - 2738
  • [27] Optimized distributed formation control using identifier-critic-actor reinforcement learning for a class of stochastic nonlinear multi-agent systems
    Wen, Guoxing
    Niu, Ben
    ISA TRANSACTIONS, 2024, 155 : 1 - 10
  • [28] Joint Bidding Model of Electricity and Frequency Regulation Market With Wind Fire Storage Multi-agent Games Based on Improved Soft Actor-critic
    Ge X.
    Fan W.
    Fu Y.
    Li Y.
    Dianwang Jishu/Power System Technology, 2023, 47 (05) : 1920 - 1930
  • [29] Multi-Objective Prioritized Task Scheduler Using Improved Asynchronous Advantage Actor Critic (a3c) Algorithm in Multi Cloud Environment
    Mangalampalli, S. Sudheer
    Karri, Ganesh Reddy
    Mohanty, Sachi Nandan
    Ali, Shahid
    Ijaz Khan, Muhammad
    Abdullaev, Sherzod
    Alqahtani, Salman A.
    IEEE ACCESS, 2024, 12 : 11354 - 11377
  • [30] Multi-agent based distributed control of distributed energy storages using load data
    Sharma, Desh Deepak
    Singh, S. N.
    Lin, Jeremy
    JOURNAL OF ENERGY STORAGE, 2016, 5 : 134 - 145