Joint Distributed Computation Offloading and Radio Resource Slicing Based on Reinforcement Learning in Vehicular Networks

Times Cited: 0
Authors
Alaghbari, Khaled A. [1 ]
Lim, Heng-Siong [2 ,3 ]
Zarakovitis, Charilaos C. [3 ]
Latiff, N. M. Abdul [1 ]
Ariffin, Sharifah Hafizah Syed [1 ]
Chien, Su Fong [3 ]
Affiliations
[1] Univ Teknol Malaysia, Fac Elect Engn, Johor Baharu 81310, Malaysia
[2] Multimedia Univ, Fac Engn & Technol, Melaka 75450, Malaysia
[3] Axon Log IKE, ICT Dept, Athens, Greece
Source
IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY | 2025, Vol. 6
Keywords
Servers; Cloud computing; Resource management; Q-learning; Computational efficiency; Costs; Vehicle dynamics; Quality of service; Energy consumption; Computational modeling; Computation offloading; radio resource slicing; reinforcement learning; distributed system; mobile-edge computing (MEC); cloud computing; Internet of Vehicles;
DOI
10.1109/OJCOMS.2025.3533093
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
Computation offloading in Internet of Vehicles (IoV) networks is a promising technology for transferring computation-intensive and latency-sensitive tasks to mobile-edge computing (MEC) or cloud servers. Privacy is an important concern in vehicular networks, as a centralized system can compromise it by sharing raw data from MEC servers with cloud servers. A distributed system offers a more attractive solution, allowing each MEC server to process data locally and make offloading decisions without sharing sensitive information. However, without a mechanism to control its load, the cloud server's computation capacity can become overloaded. In this study, we propose distributed computation offloading systems that use reinforcement learning, specifically Q-learning, to optimize offloading decisions and balance the computation load across the network while minimizing the number of task-offloading switches. We introduce both fixed and adaptive low-complexity mechanisms for allocating cloud-server resources, formulating the reward function of the Q-learning method to achieve efficient offloading decisions. The proposed adaptive approach enables cooperative utilization of cloud resources by multiple agents. A joint optimization framework is established to maximize overall communication and computing resource utilization, in which task offloading is performed on a small time scale at local edge servers, while radio resource slicing is adjusted on a larger time scale at the cloud server. Simulation results using real vehicle-trace datasets demonstrate that the proposed distributed systems achieve lower computation load costs, lower offloading switching costs, and reduced latency while increasing cloud-server utilization compared with centralized systems.
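The offloading idea summarized in the abstract can be illustrated with a deliberately small sketch, not the authors' actual formulation: a tabular Q-learning agent at an edge server chooses where a task runs (local MEC or cloud), and the reward jointly penalizes computation load and a switching cost, echoing the paper's goal of balancing load while minimizing offloading switches. All states, costs, and hyperparameters below are illustrative assumptions.

```python
import random

ACTIONS = [0, 1]              # 0: process at local MEC, 1: offload to cloud
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration

def reward(action, cloud_load, prev_action):
    """Negative cost: cloud load cost plus a penalty for switching targets."""
    load_cost = cloud_load if action == 1 else 0.5   # assumed fixed MEC cost
    switch_cost = 0.2 if action != prev_action else 0.0
    return -(load_cost + switch_cost)

def train(episodes=2000, steps=20, seed=0):
    rng = random.Random(seed)
    # State: (discretized cloud load level 0-2, previous action).
    Q = {(l, p): [0.0, 0.0] for l in range(3) for p in ACTIONS}
    for _ in range(episodes):
        load, prev = rng.randrange(3), rng.choice(ACTIONS)
        for _ in range(steps):
            s = (load, prev)
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)                      # explore
            else:
                a = max(ACTIONS, key=lambda x: Q[s][x])      # exploit
            r = reward(a, load, prev)
            # Toy dynamics: offloading raises cloud load, local work drains it.
            load2 = min(2, load + 1) if a == 1 else max(0, load - 1)
            s2 = (load2, a)
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            load, prev = load2, a
    return Q

Q = train()
# When the cloud is heavily loaded (level 2), the learned policy should
# keep the task at the local MEC server (action 0).
policy_when_loaded = max(ACTIONS, key=lambda a: Q[(2, 0)][a])
print(policy_when_loaded)
```

The switching penalty in the reward is what discourages the agent from oscillating between targets; the paper's adaptive cloud-resource mechanism would additionally modulate the cloud-side cost seen by each agent, which this single-agent toy omits.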
Pages: 1231-1245 (15 pages)
References (20 in total)
[1]   Decentralized Deep Reinforcement Learning Meets Mobility Load Balancing [J].
Chang, Hao-Hsuan ;
Chen, Hao ;
Zhang, Jianzhong ;
Liu, Lingjia .
IEEE-ACM TRANSACTIONS ON NETWORKING, 2023, 31 (02) :473-484
[2]  
Dab Boutheina, 2019, 2019 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), P45
[3]   Q-Learning-Based Task Offloading and Resources Optimization for a Collaborative Computing System [J].
Gao, Zihan ;
Hao, Wanming ;
Han, Zhuo ;
Yang, Shouyi .
IEEE ACCESS, 2020, 8 :149011-149024
[4]   Deep-Reinforcement-Learning-Based Distributed Computation Offloading in Vehicular Edge Computing Networks [J].
Geng, Liwei ;
Zhao, Hongbo ;
Wang, Jiayue ;
Kaushik, Aryan ;
Yuan, Shuai ;
Feng, Wenquan .
IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (14) :12416-12433
[5]   Mobility and connectivity in highway vehicular networks: A case study in Madrid [J].
Gramaglia, Marco ;
Trullols-Cruces, Oscar ;
Naboulsi, Diala ;
Fiore, Marco ;
Calderon, Maria .
COMPUTER COMMUNICATIONS, 2016, 78 :28-44
[6]   Joint Computation Offloading and Resource Allocation for Edge-Cloud Collaboration in Internet of Vehicles via Deep Reinforcement Learning [J].
Huang, Jiwei ;
Wan, Jiangyuan ;
Lv, Bofeng ;
Ye, Qiang ;
Chen, Ying .
IEEE SYSTEMS JOURNAL, 2023, 17 (02) :2500-2511
[7]  
Jiang F, 2020, IEEE INT CONF COMMUN, P460, DOI [10.1109/iccc49849.2020.9238925, 10.1109/ICCC49849.2020.9238925]
[8]   A Q-learning based Method for Energy-Efficient Computation Offloading in Mobile Edge Computing [J].
Jiang, Kai ;
Zhou, Huan ;
Li, Dawei ;
Liu, Xuxun ;
Xu, Shouzhi .
2020 29TH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS (ICCCN 2020), 2020,
[9]   Task Offloading and Resource Allocation in Vehicular Networks: A Lyapunov-Based Deep Reinforcement Learning Approach [J].
Kumar, Anitha Saravana ;
Zhao, Lian ;
Fernando, Xavier .
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (10) :13360-13373
[10]   Spectrum Sharing in Vehicular Networks Based on Multi-Agent Reinforcement Learning [J].
Liang, Le ;
Ye, Hao ;
Li, Geoffrey Ye .
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2019, 37 (10) :2282-2292