SPRNN: A spatial-temporal recurrent neural network for crowd flow prediction
Cited by: 10
Authors:
Tang, Gaozhong [1]
Li, Bo [1]
Dai, Hong-Ning [2]
Zheng, Xi [3]
Affiliations:
[1] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou, Peoples R China
[2] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
[3] Macquarie Univ, Dept Comp, Sydney, Australia
Funding:
National Natural Science Foundation of China;
Australian Research Council;
Keywords:
Crowd flow prediction;
Spatial feature;
Temporal feature;
Road structural information;
Gated recurrent unit;
MODEL;
DEEP;
DOI:
10.1016/j.ins.2022.09.053
Chinese Library Classification (CLC) number:
TP [Automation Technology, Computer Technology];
Discipline code:
0812;
Abstract:
The capability of predicting the future trends of crowds has rendered crowd flow prediction increasingly critical in building intelligent transportation systems, and it has attracted substantial research efforts. The trend of crowd flows is closely related to time and the urban topography. Therefore, extracting and leveraging both spatial features and temporal features are key ingredients for effectively predicting crowd flows. Many previous works extract spatial features from crowd-flow data in an iterative way. As a result, models suffer from a heavy computation cost while ignoring details of road topology and structure information. Meanwhile, temporal features, including short-term features and long-term features, are extracted separately. Fusing all features only at the last stage before making the prediction also neglects the underlying associativity between the various features. To address these limitations, we leverage spatial features by extracting road structural information, such as road connections, road density, and road width. Rather than extracting spatial features from crowd-flow data, we capture them from images of city maps by adopting convolutional neural networks. Moreover, we implement a new sequence feature fusion mechanism to merge both spatial features and temporal features from various time scales so as to predict crowd flows. We conduct extensive experiments to evaluate our model on three benchmark datasets. The experimental results demonstrate that the model outperforms 15 state-of-the-art methods. The source code is available at: https://github.com/CVisionProcessing/SPRNN. (c) 2022 Elsevier Inc. All rights reserved.
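The abstract sketches the architecture at a high level: a CNN encodes road-structure information from city-map images, recurrent units capture short- and long-term temporal patterns of crowd flow, and a fusion step merges the features for prediction. The snippet below is a minimal illustrative sketch of that general idea, not the authors' SPRNN implementation; the class name CrowdFlowSketch, all layer sizes, and the two-time-scale inputs are assumptions made for illustration, and a simple concatenation plus MLP stands in for the paper's sequence feature fusion mechanism.

```python
# Minimal sketch (NOT the authors' SPRNN code): a CNN encodes a city-map image into
# spatial features, two GRUs summarize short- and long-term crowd-flow sequences,
# and a small fusion head merges everything to predict the next flow vector.
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class CrowdFlowSketch(nn.Module):
    def __init__(self, flow_dim=64, hidden=128):
        super().__init__()
        # Spatial branch: extract road-structure features from a map image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (B, 32)
        )
        # Temporal branches: short-term (recent steps) and long-term (e.g. daily) GRUs.
        self.gru_short = nn.GRU(flow_dim, hidden, batch_first=True)
        self.gru_long = nn.GRU(flow_dim, hidden, batch_first=True)
        # Fusion head: merge spatial and temporal features, predict the next flow.
        self.head = nn.Sequential(
            nn.Linear(32 + 2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, flow_dim),
        )

    def forward(self, map_img, flow_short, flow_long):
        spatial = self.cnn(map_img)                       # (B, 32)
        _, h_short = self.gru_short(flow_short)           # (1, B, hidden)
        _, h_long = self.gru_long(flow_long)              # (1, B, hidden)
        fused = torch.cat([spatial, h_short[-1], h_long[-1]], dim=-1)
        return self.head(fused)                           # (B, flow_dim)

# Usage example: batch of 2, 3-channel 64x64 map, 12 recent steps, 7 daily steps.
model = CrowdFlowSketch()
pred = model(torch.randn(2, 3, 64, 64), torch.randn(2, 12, 64), torch.randn(2, 7, 64))
print(pred.shape)  # torch.Size([2, 64])
```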
Pages: 19-34
Page count: 16