A Ride-Hailing Company Supply Demand Prediction Using Recurrent Neural Networks, GRU and LSTM

Cited by: 0
Authors
Fathi, Sahand [1 ]
Fathi, Soheil [2 ]
Balali, Vahid [3 ]
Affiliations
[1] Calif State Univ Long Beach, Long Beach, CA 90840 USA
[2] Univ Florida, UrbSys Lab, Gainesville, FL 32611 USA
[3] Calif State Univ Long Beach, Dept Civil Engn & Construct Engn Management, Long Beach, CA 90840 USA
Source
INTELLIGENT COMPUTING, VOL 3, 2024 | 2024, Vol. 1018
Keywords
Recurrent Neural Networks; Long Short-Term Memory; Gated Recurrent Units
DOI
10.1007/978-3-031-62269-4_9
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In today's fast-paced and dynamic market, the ability to predict supply and demand accurately is critical to a business's success. Companies that can anticipate their customers' needs and allocate resources efficiently to meet them gain a competitive advantage. With the advent of advanced data analytics and machine learning algorithms, businesses can now leverage large datasets to make informed operational decisions. Snapp is a ride-hailing company operating in Tehran, Iran, with a large fleet of vehicles and a large user base. To ensure that enough vehicles are available to meet demand, Snapp needs to predict the demand for rides in different parts of the city. Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Gated Recurrent Units (GRUs) are powerful neural network architectures designed to handle sequential data. RNNs recognize patterns in sequences of data, such as time series or text; their 'memory' can carry information across long sequences, making them suitable for tasks such as time series prediction, speech recognition, and machine translation. In this research, we collect historical data on Snapp Inc.'s ride-hailing services, including pickup and drop-off locations, dates, times, and the number of ride requests in each region. We preprocess the data, train the models, and evaluate their performance using various metrics, including Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
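The core building blocks the abstract names can be sketched in a few lines: a single GRU time step (update and reset gates deciding how much past state to carry forward) and the two reported evaluation metrics, RMSE and MAE. This is a minimal NumPy illustration under assumed toy shapes and random weights, not the authors' implementation or the actual Snapp data.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: gates control how much past state is kept."""
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde           # blend old and new

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

# Toy run: feed 10 steps of 3 features through a 4-unit GRU cell.
rng = np.random.default_rng(0)
d, h = 3, 4  # assumed input and hidden sizes
params = [rng.standard_normal((h, d if i % 2 == 0 else h)) * 0.1
          for i in range(6)]  # Wz, Uz, Wr, Ur, Wh, Uh
hidden = np.zeros(h)
for x in rng.standard_normal((10, d)):
    hidden = gru_step(x, hidden, *params)

# Metrics on made-up demand counts (true vs. predicted rides per region).
y_true = np.array([120.0, 80.0, 150.0])
y_pred = np.array([110.0, 95.0, 140.0])
print(rmse(y_true, y_pred))  # ≈ 11.90
print(mae(y_true, y_pred))   # ≈ 11.67
```

The gated update is what distinguishes GRUs (and LSTMs) from plain RNNs: the hidden state is a convex combination of its previous value and a fresh candidate, which mitigates vanishing gradients over long sequences.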
Pages: 123-133
Page count: 11
Related Papers
19 records
[11]  
Luo F.-L., 2022, Google Patents
[12]  
Mandic D., 2001, ADAPT LEARN SYST SIG
[13]  
Ogunmolu O, 2016, arXiv, DOI 10.48550/arXiv.1610.01439
[14]   Learning representations by back-propagating errors [J].
RUMELHART, DE ;
HINTON, GE ;
WILLIAMS, RJ .
NATURE, 1986, 323 (6088) :533-536
[15]  
Sak H, 2014, INTERSPEECH, P338
[16]  
Tan J.P., 2022, 2022 3 INT C EM TECH, P1
[17]   Dynamical time series embeddings in recurrent neural networks [J].
Uribarri, Gonzalo ;
Mindlin, Gabriel B. .
CHAOS SOLITONS & FRACTALS, 2022, 154
[18]  
Wu YH, 2016, Arxiv, DOI arXiv:1609.08144
[19]  
Yamak P.T., 2019, P 2019 2 INT C ALG C, P49, DOI 10.1145/3377713.3377722