Multi-Agent Multi-Armed Bandit Learning for Offloading Delay Minimization in V2X Networks

Cited by: 3
Authors
Nang Hung Nguyen [1]
Phi Le Nguyen [1]
Hieu Dinh [1]
Thanh Hung Nguyen [1]
Kien Nguyen [2]
Affiliations
[1] Hanoi Univ Sci & Technol, Sch Informat & Commun Technol, Hanoi, Vietnam
[2] Chiba Univ, Grad Sch Engn, Chiba, Japan
Source
2021 IEEE 19TH INTERNATIONAL CONFERENCE ON EMBEDDED AND UBIQUITOUS COMPUTING (EUC 2021) | 2021
Keywords
ALLOCATION
DOI
10.1109/EUC53437.2021.00016
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
In a three-tier Vehicle-to-X (V2X) network, a vehicle can offload computational tasks to the edge computing component at a roadside unit (RSU) or to a base station with cloud computing (gNB). Moreover, an RSU can also offload to the gNB, forming three offloading paths: vehicle-to-RSU, vehicle-to-gNB, and RSU-to-gNB. This paper aims to minimize the offloaded tasks' average latency while dealing with network dynamics. Existing works assume fixed network parameters and hence fail to address these dynamics. As a solution, we use multi-agent multi-armed bandit (MAB) learning for offloading, which can adapt to the network dynamics and optimize the latency. More importantly, we propose a new MAB offloading scheme with an exploration mechanism based on the Sigmoid function. We conduct an extensive evaluation to show the superiority of our proposal. First, the proposed Sigmoid exploration mechanism reduces the tasks' average latency by 35% compared to a basic MAB using negative rewarding. Second, the simulation results show that our proposed offloading algorithm shortens the task latency by 18.5% on average and 56.9% in the best case, compared to the state-of-the-art.
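The abstract names the algorithmic ingredients (a per-agent multi-armed bandit over the three offloading paths and a Sigmoid-based exploration mechanism) without spelling out the update rule. The Python sketch below is only an illustration of that idea under stated assumptions: the class name SigmoidMAB, the Sigmoid parameters k and t0, the negative-latency reward, and the simulated latency values are hypothetical and are not taken from the paper.

```python
import math
import random

# Illustrative sketch only: a single agent (e.g., a vehicle) choosing among the
# three offloading paths named in the abstract, with an exploration probability
# shaped by a Sigmoid of the decision round. Parameter values, the reward
# definition, and the latency feedback are assumptions, not the paper's scheme.

ARMS = ["vehicle-to-RSU", "vehicle-to-gNB", "RSU-to-gNB"]


def sigmoid_explore_prob(t, k=0.05, t0=100):
    """Exploration probability that decays from ~1 to ~0 along a Sigmoid in round t."""
    return 1.0 - 1.0 / (1.0 + math.exp(-k * (t - t0)))


class SigmoidMAB:
    def __init__(self, arms):
        self.arms = arms
        self.counts = {a: 0 for a in arms}        # times each arm was pulled
        self.avg_reward = {a: 0.0 for a in arms}  # running mean reward per arm
        self.t = 0                                # decision round

    def select(self):
        self.t += 1
        if random.random() < sigmoid_explore_prob(self.t):
            return random.choice(self.arms)                      # explore
        return max(self.arms, key=lambda a: self.avg_reward[a])  # exploit

    def update(self, arm, latency):
        # Hypothetical reward: the negative of the observed task latency,
        # so minimizing latency corresponds to maximizing reward.
        reward = -latency
        self.counts[arm] += 1
        self.avg_reward[arm] += (reward - self.avg_reward[arm]) / self.counts[arm]


# Usage with simulated (placeholder) latencies in milliseconds.
agent = SigmoidMAB(ARMS)
for _ in range(300):
    arm = agent.select()
    observed_latency = random.uniform(5.0, 20.0)
    agent.update(arm, observed_latency)
```

In this sketch the exploration probability stays near 1 in early rounds and decays along the Sigmoid toward pure exploitation of the lowest-latency path, which is one plausible reading of a "Sigmoid exploration mechanism"; the paper's actual scheme may differ.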
Pages: 47-55
Number of pages: 9