Enabling Efficient Vehicle-Road Cooperation Through AIoT: A Deep Learning Approach to Computational Offloading

Cited by: 2
Authors
Wang, Xin [1 ]
Alassafi, Madini O. [2 ,3 ]
Alsaadi, Fawaz E. [2 ,3 ]
Xue, Xingsi [4 ]
Zou, Longhao [5 ]
Liu, Zhonghua [6 ]
Affiliations
[1] Northeastern Univ, Sch Informat Sci & Engn, Shenyang 110819, Peoples R China
[2] King Abdulaziz Univ, Dept Informat Technol, Jeddah 80221, Saudi Arabia
[3] King Abdulaziz Univ, Fac Comp & Informat Technol, Jeddah 80221, Saudi Arabia
[4] Fujian Univ Technol, Fujian Prov Key Lab Big Data Min & Applicat, Fuzhou 350118, Peoples R China
[5] Peng Cheng Lab, Dept Broadband Commun, Shenzhen 518057, Peoples R China
[6] China Med Univ, Shengjing Hosp, Dept Comp Ctr, Shenyang 110004, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 22
Funding
National Natural Science Foundation of China
Keywords
Task analysis; Internet of Things; Artificial intelligence; Collaboration; Computational modeling; Servers; Energy consumption; Augmented Intelligence of Things (AIoT); deep reinforcement learning; resource scheduling; twin delayed deep deterministic policy gradient (TD3); vehicle-road cooperation (VRC) system; INTERNET;
DOI
10.1109/JIOT.2024.3445642
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The integration of Artificial Intelligence with the Internet of Things significantly enhances the functionality of vehicle-road cooperation (VRC) systems by enabling smarter, real-time decision-making and resource optimization across interconnected vehicular networks. To tackle the challenges associated with resource constraints, this study introduces a method where vehicle users can offload tasks to nearby roadside units (RSUs) or service-oriented vehicles to ensure timely application execution. However, task offloading introduces additional transmission delays and energy expenditures. Consequently, this article first formulates the computation offloading problem, aiming to minimize the total task processing time and energy consumption under the constraints of the resources provided by RSUs and service-oriented vehicles. We model computation offloading within the VRC framework as a Markov decision process (MDP) and propose a multiagent reinforcement learning-based resource scheduling method, in which each vehicle acts as an intelligent agent that interacts with the environment and influences offloading decisions. The method employs the twin delayed deep deterministic policy gradient (TD3) algorithm to train deep neural networks that decide on task offloading and computational resource allocation. Simulation results demonstrate that, compared with existing algorithms, the proposed method more effectively utilizes the computational resources available through RSUs and service-oriented vehicles within the VRC system. It achieves joint optimization of latency and energy consumption, validating the efficacy of the proposed approach in enhancing the operational efficiency and sustainability of urban transportation systems.
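As a concrete illustration of the TD3-based decision loop summarized in the abstract, a minimal single-agent sketch in PyTorch is given below. The state/action encoding (here, a continuous action read as an offloading ratio plus a compute-allocation fraction), network sizes, and all hyperparameters are assumptions made for this example, not the authors' implementation; the paper's multiagent setting would run one such agent per vehicle.

```python
# Illustrative sketch only: a minimal TD3 agent for one vehicle whose continuous
# action is interpreted as (offloading ratio, resource-allocation fraction).
# Dimensions, reward shaping, and hyperparameters are assumptions for this example.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Twin Q-networks: TD3 takes the minimum of the two to curb overestimation."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        def q_net():
            return nn.Sequential(
                nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, 1),
            )
        self.q1, self.q2 = q_net(), q_net()

    def forward(self, state, action):
        sa = torch.cat([state, action], dim=1)
        return self.q1(sa), self.q2(sa)

class TD3Agent:
    def __init__(self, state_dim, action_dim, gamma=0.99, tau=0.005,
                 policy_noise=0.2, noise_clip=0.5, policy_delay=2):
        self.actor = Actor(state_dim, action_dim)
        self.critic = Critic(state_dim, action_dim)
        self.actor_target = copy.deepcopy(self.actor)
        self.critic_target = copy.deepcopy(self.critic)
        self.actor_opt = torch.optim.Adam(self.actor.parameters(), lr=3e-4)
        self.critic_opt = torch.optim.Adam(self.critic.parameters(), lr=3e-4)
        self.gamma, self.tau = gamma, tau
        self.policy_noise, self.noise_clip = policy_noise, noise_clip
        self.policy_delay, self.step = policy_delay, 0

    def train_step(self, state, action, reward, next_state, done):
        # 1) Target policy smoothing: add clipped noise to the target action.
        with torch.no_grad():
            noise = (torch.randn_like(action) * self.policy_noise
                     ).clamp(-self.noise_clip, self.noise_clip)
            next_action = (self.actor_target(next_state) + noise).clamp(-1.0, 1.0)
            q1_t, q2_t = self.critic_target(next_state, next_action)
            target_q = reward + self.gamma * (1.0 - done) * torch.min(q1_t, q2_t)

        # 2) Clipped double-Q learning: regress both critics toward the target.
        q1, q2 = self.critic(state, action)
        critic_loss = F.mse_loss(q1, target_q) + F.mse_loss(q2, target_q)
        self.critic_opt.zero_grad(); critic_loss.backward(); self.critic_opt.step()

        # 3) Delayed actor update and soft target updates.
        self.step += 1
        if self.step % self.policy_delay == 0:
            actor_loss = -self.critic(state, self.actor(state))[0].mean()
            self.actor_opt.zero_grad(); actor_loss.backward(); self.actor_opt.step()
            for tgt, src in ((self.actor_target, self.actor),
                             (self.critic_target, self.critic)):
                for tp, p in zip(tgt.parameters(), src.parameters()):
                    tp.data.mul_(1 - self.tau).add_(self.tau * p.data)

if __name__ == "__main__":
    # Smoke test with random transitions (batch of 32, illustrative dimensions).
    agent = TD3Agent(state_dim=8, action_dim=2)
    s = torch.randn(32, 8); a = torch.rand(32, 2) * 2 - 1
    r = torch.randn(32, 1); s2 = torch.randn(32, 8); d = torch.zeros(32, 1)
    for _ in range(4):
        agent.train_step(s, a, r, s2, d)
```

One reward choice consistent with the stated objective would be a negative weighted sum of per-task latency and energy consumption, so that maximizing return jointly minimizes both quantities.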
Pages: 36127-36139
Page count: 13