Data-Aware Path Planning for Autonomous Vehicles Using Reinforcement Learning

Cited by: 0
Authors
AlSaqabi, Yousef [1 ,2 ]
Krishnamachari, Bhaskar [1 ]
Affiliations
[1] Univ Southern Calif, Dept Elect & Comp Engn, Los Angeles, CA 90007 USA
[2] Kuwait Univ, Dept Elect Engn, Kuwait 72304, Kuwait
Source
APPLIED SCIENCES-BASEL | 2025, Vol. 15, Issue 11
Keywords
autonomous vehicles; vehicle routing and navigation; vehicular networks; reinforcement learning; AD HOC NETWORKS;
DOI
10.3390/app15116099
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703
Abstract
This paper addresses the challenge of optimizing path planning for autonomous vehicles in urban environments by considering both traffic and bandwidth variability on the road. Traditional path planning methods are inadequate for the needs of interconnected vehicles that require significant real-time data transfer. We propose a reinforcement learning approach for path planning, formulated to use road traffic conditions and bandwidth availability. This approach optimizes routes by minimizing travel time while maximizing data transfer capability. We create a realistic simulation environment using GraphML, incorporating real-world map data and vehicle mobility patterns to evaluate the effectiveness of our approach. Through comprehensive testing against various baselines, our reinforcement learning model demonstrates the ability to adapt and find optimal paths that significantly outperform conventional strategies. These results emphasize the feasibility of using reinforcement learning for dynamic path optimization and highlight its potential to improve both the efficiency of travel and the reliability of data-driven decisions in autonomous vehicular networks.
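The abstract describes learning routes that trade off travel time against data-transfer capability. As a minimal illustrative sketch (not the authors' actual formulation: the toy graph, the reward weight `ALPHA`, and all names here are assumptions), tabular Q-learning on a small road graph with per-edge travel time and bandwidth might look like this:

```python
# Hypothetical sketch: tabular Q-learning on a toy road graph where each
# edge carries (travel_time_min, bandwidth_mbps). The reward penalizes
# travel time and rewards data transferred (~ time * rate); ALPHA is an
# assumed tuning weight, not a value from the paper.
import random

# Toy road network: node -> {neighbor: (travel_time_min, bandwidth_mbps)}
GRAPH = {
    "A": {"B": (5, 20), "C": (3, 5)},
    "B": {"D": (4, 15)},
    "C": {"D": (6, 30)},
    "D": {},  # destination
}
DEST, ALPHA = "D", 0.1

def reward(t, bw):
    # Negative travel time plus a bonus for data moved along the edge.
    return -t + ALPHA * t * bw

Q = {(n, m): 0.0 for n, nbrs in GRAPH.items() for m in nbrs}

def choose(node, eps):
    # Epsilon-greedy action selection over outgoing edges.
    acts = list(GRAPH[node])
    if random.random() < eps:
        return random.choice(acts)
    return max(acts, key=lambda m: Q[(node, m)])

random.seed(0)
for _ in range(2000):  # training episodes
    node = "A"
    while node != DEST:
        nxt = choose(node, eps=0.2)
        t, bw = GRAPH[node][nxt]
        future = max((Q[(nxt, m)] for m in GRAPH[nxt]), default=0.0)
        Q[(node, nxt)] += 0.1 * (reward(t, bw) + future - Q[(node, nxt)])
        node = nxt

# Greedy rollout of the learned policy from the origin.
path, node = ["A"], "A"
while node != DEST:
    node = choose(node, eps=0.0)
    path.append(node)
print(path)
```

On this toy instance the learner prefers the slower but higher-bandwidth route A-C-D over the faster A-B-D, which is the kind of trade-off the abstract describes; the paper itself builds the environment from real GraphML map data rather than a hand-written dictionary.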
Pages: 20