Predicting Citywide Passenger Demand via Reinforcement Learning from Spatio-Temporal Dynamics

Cited: 4
Authors
Ning, Xiaodong [1 ]
Yao, Lina [1 ]
Wang, Xianzhi [2 ]
Benatallah, Boualem [1 ]
Salim, Flora [3 ]
Haghighi, Pari Delir [4 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] Univ Technol Sydney, Sydney, NSW, Australia
[3] RMIT Univ, Melbourne, Vic, Australia
[4] Monash Univ, Clayton, Vic, Australia
Source
PROCEEDINGS OF THE 15TH EAI INTERNATIONAL CONFERENCE ON MOBILE AND UBIQUITOUS SYSTEMS: COMPUTING, NETWORKING AND SERVICES (MOBIQUITOUS 2018) | 2018
Keywords
Reinforcement Learning; spatial-temporal dynamics; passenger demand prediction;
DOI
10.1145/3286978.3286991
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Global urbanization places unprecedented pressure on urban infrastructure and public resources, and the resulting population growth makes it challenging to satisfy the daily needs of urban residents. The 'Smart City' paradigm uses diverse data-collection sensors to help manage assets and resources more intelligently and efficiently. Under the Smart City umbrella, a primary research initiative for improving the efficiency of car-hailing services is predicting citywide passenger demand so as to address the imbalance between demand and supply. However, predicting passenger demand requires analyzing various data sources, such as historical passenger demand, crowd outflow, and weather information, and it remains challenging to discover the latent relationships among these data. To address this challenge, we propose to improve passenger demand prediction by learning the salient spatial-temporal dynamics within a reinforcement learning framework. Our model employs an information selection mechanism that focuses on the most distinctive data in the historical observations. The mechanism automatically adjusts the information zone according to prediction performance to find the optimal choice, and it ensures that the prediction model takes full advantage of the available data by introducing positive correlations and excluding negative ones. We conducted experiments on a large-scale real-world dataset covering 1.5 million people in a major city in China. The results show that our model outperforms the state-of-the-art and a series of baselines by a large margin.
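As an illustrative aside (this record contains no code from the authors), the minimal Python sketch below shows one way an "information zone" could be adjusted from prediction performance, in the spirit of the reinforcement-learning selection mechanism described in the abstract. The synthetic demand series, the candidate window sizes, the mean-based predictor, and the bandit-style value update are all assumptions made for exposition, not the paper's method.

```python
import numpy as np

# Illustrative sketch only (not the authors' implementation): a bandit-style
# learner picks an "information zone" (lookback window) over a synthetic
# demand series and is rewarded by how well that zone predicts the next value.

rng = np.random.default_rng(0)
demand = rng.poisson(lam=50, size=500).astype(float)  # hypothetical hourly demand counts

zones = [3, 6, 12, 24]            # candidate information zones, in time steps (assumed values)
values = np.zeros(len(zones))     # running value estimate for each zone choice
counts = np.zeros(len(zones))
epsilon = 0.1                     # exploration rate (assumed)

def predict(history: np.ndarray) -> float:
    """Stand-in predictor: mean of the selected information zone."""
    return float(history.mean())

for t in range(max(zones), len(demand) - 1):
    # epsilon-greedy choice of which information zone to use at this step
    if rng.random() < epsilon:
        a = int(rng.integers(len(zones)))
    else:
        a = int(np.argmax(values))
    window = zones[a]
    y_hat = predict(demand[t - window:t])
    reward = -abs(demand[t] - y_hat)               # smaller prediction error -> larger reward
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]  # incremental mean update

print("Zone preferred by the learner (steps):", zones[int(np.argmax(values))])
```

In the paper the setting is richer (spatial as well as temporal dynamics, and a learned prediction model rather than a mean), but the loop of adjusting the information zone from observed prediction performance is the idea the abstract describes.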
Pages: 19-28
Page count: 10
Related Papers
50 items in total
  • [41] Towards the automation of woven fabric draping via reinforcement learning and Extended Position Based Dynamics
    Blies, Patrick M.
    Keller, Sophia
    Kuenzer, Ulrich
    El Manyari, Yassine
    Maier, Franz
    Sause, Markus G. R.
    Wasserer, Marcel
    Hinterhoelzl, Roland M.
    JOURNAL OF MANUFACTURING PROCESSES, 2025, 141 : 336 - 350
  • [42] Achieving accurate trajectory predicting and tracking for autonomous vehicles via reinforcement learning-assisted control approaches
    Tan, Guangwen
    Li, Mengshan
    Hou, Biyu
    Zhu, Jihong
    Guan, Lixin
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 135
  • [43] From Specification to Topology: Automatic Power Converter Design via Reinforcement Learning
    Fan, Shaoze
    Cao, Ningyuan
    Zhang, Shun
    Li, Jing
    Guo, Xiaoxiao
    Zhang, Xin
    2021 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN (ICCAD), 2021,
  • [44] Participants Selection for From-Scratch Mobile Crowdsensing via Reinforcement Learning
    Hu, Yunfan
    Wang, Jiangtao
    Wu, Bo
    Helal, Sumi
    2020 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS (PERCOM 2020), 2020,
  • [45] Shaping in Reinforcement Learning via Knowledge Transferred from Human-Demonstrations
    Wang Guofang
    Fang Zhou
    Li Ping
    Li Bo
    2015 34TH CHINESE CONTROL CONFERENCE (CCC), 2015, : 3033 - 3038
  • [46] A Stackelberg Game Model of Real-time Supply-demand Interaction and the Solving Method via Reinforcement Learning
    Bao T.
    Zhang X.
    Yu T.
    Liu X.
    Wang D.
Zhongguo Dianji Gongcheng Xuebao/Proceedings of the Chinese Society of Electrical Engineering, 2018, 38 (10): 2947 - 2955
  • [47] Latent Dynamics for Artefact-Free Character Animation via Data-Driven Reinforcement Learning
    Gamage, Vihanga
    Ennis, Cathy
    Ross, Robert
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT IV, 2021, 12894 : 675 - 687
  • [48] Framing reinforcement learning from human reward: Reward positivity, temporal discounting, episodicity, and performance
    Knox, W. Bradley
    Stone, Peter
    ARTIFICIAL INTELLIGENCE, 2015, 225 : 24 - 50
  • [49] Adaptive Video Streaming via Deep Reinforcement Learning from User Trajectory Preferences
    Xiao, Qingyu
    Ye, Jin
    Pang, Chengjie
    Ma, Liangdi
    Jiang, Wengchao
    2020 IEEE 39TH INTERNATIONAL PERFORMANCE COMPUTING AND COMMUNICATIONS CONFERENCE (IPCCC), 2020,
  • [50] Adaptive Fault-Tolerant Tracking Control for Affine Nonlinear Systems With Unknown Dynamics via Reinforcement Learning
    Roshanravan, Sajad
    Shamaghdari, Saeed
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, 21 (01) : 569 - 580