Interpretable Spatiotemporal Deep Learning Model for Traffic Flow Prediction based on Potential Energy Fields

Cited: 17
Authors
Ji, Jiahao [1 ]
Wang, Jingyuan [2 ,3 ]
Jiang, Zhe [4 ]
Ma, Jingtian [1 ]
Zhang, Hu [1 ]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, Beijing, Peoples R China
[2] Beihang Univ, State Key Lab Software Dev Environm, Beijing, Peoples R China
[3] Beihang Univ, MOE Engn Res Ctr ACAT, Beijing, Peoples R China
[4] Univ Alabama, Dept Comp Sci, Tuscaloosa, AL 35487 USA
Source
20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020) | 2020
Funding
National Natural Science Foundation of China;
Keywords
Potential Energy Fields; Spatiotemporal Model; Interpretable Prediction; Deep Learning; CONVOLUTION NETWORK;
DOI
10.1109/ICDM50108.2020.00128
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Traffic flow prediction is of great importance in traffic management and public safety, but it is challenging due to complex spatial-temporal dependencies as well as temporal dynamics. Existing work either focuses on traditional statistical models, which have limited prediction accuracy, or relies on black-box deep learning models, which achieve superior prediction accuracy but are hard to interpret. In contrast, we propose a novel interpretable spatiotemporal deep learning model for traffic flow prediction. Our main idea is to model the physics of traffic flow through a number of latent Spatio-Temporal Potential Energy Fields (ST-PEFs), analogous to water flow driven by the gravity field. We develop a Wind field Decomposition (WD) algorithm to decompose traffic flow into poly-tree components so that ST-PEFs can be established. We then design a spatiotemporal deep learning model for the ST-PEFs, which consists of a temporal component (modeling the temporal correlation) and a spatial component (modeling the spatial dependencies). To the best of our knowledge, this is the first work that makes traffic flow prediction based on ST-PEFs. Experimental results on real-world traffic datasets show the effectiveness of our model compared to existing methods. A case study confirms our model's interpretability.
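The abstract describes decomposing a traffic flow network into poly-tree components (directed subgraphs whose underlying undirected graphs are acyclic), in the spirit of classical forest decompositions such as Nash-Williams' (reference [9]). The paper's actual WD algorithm is not given in this record; the following is only a minimal illustrative sketch of the general idea, assuming a hypothetical greedy strategy that places each edge (heaviest flow first) into the first layer it does not close an undirected cycle in, tracked with union-find. The function name `polytree_decompose` and this ordering are assumptions, not the authors' method.

```python
def _find(parent, x):
    """Union-find root lookup with path compression."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def _union(parent, a, b):
    """Merge the undirected components containing a and b."""
    parent[_find(parent, a)] = _find(parent, b)

def polytree_decompose(edges):
    """Split directed, weighted edges into layers whose underlying
    undirected graphs are forests, i.e. each layer is a poly-tree.

    edges: iterable of (u, v, flow) tuples.
    Returns a list of edge lists, heaviest flows placed first.
    """
    layers = []  # each entry: (union-find parent map, edge list)
    for u, v, flow in sorted(edges, key=lambda e: -e[2]):
        for parent, layer_edges in layers:
            # Adding (u, v) keeps the layer acyclic iff u and v lie
            # in different undirected components of that layer.
            if _find(parent, u) != _find(parent, v):
                _union(parent, u, v)
                layer_edges.append((u, v, flow))
                break
        else:  # edge would close a cycle in every layer: open a new one
            parent = {}
            _union(parent, u, v)
            layers.append((parent, [(u, v, flow)]))
    return [layer_edges for _, layer_edges in layers]

# A directed triangle cannot be one poly-tree, so it splits in two:
layers = polytree_decompose([("a", "b", 5.0), ("b", "c", 3.0), ("c", "a", 4.0)])
```

Once each component is cycle-free, a scalar potential can be assigned consistently along its edges, which is what makes the poly-tree structure a natural prerequisite for establishing the ST-PEFs.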
Pages: 1076 - 1081 (6 pages)
References
20 in total
  • [1] [Anonymous], EMNLP
  • [2] [Anonymous], 2019, AAAI
  • [3] Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
    Barredo Arrieta, Alejandro
    Diaz-Rodriguez, Natalia
    Del Ser, Javier
    Bennetot, Adrien
    Tabik, Siham
    Barbado, Alberto
    Garcia, Salvador
    Gil-Lopez, Sergio
    Molina, Daniel
    Benjamins, Richard
    Chatila, Raja
    Herrera, Francisco
    [J]. INFORMATION FUSION, 2020, 58 : 82 - 115
  • [4] Interactive Temporal Recurrent Convolution Network for Traffic Prediction in Data Centers
    Cao, Xiaofeng
    Zhong, Yuhua
    Zhou, Yun
    Wang, Jiang
    Zhu, Cheng
    Zhang, Weiming
    [J]. IEEE ACCESS, 2018, 6 : 5276 - 5289
  • [5] Du M., 2019, COMMUNICATIONS ACM
  • [6] Geng X, 2019, AAAI CONF ARTIF INTE, P3656
  • [7] A Survey of Methods for Explaining Black Box Models
    Guidotti, Riccardo
    Monreale, Anna
    Ruggieri, Salvatore
    Turini, Franco
    Giannotti, Fosca
    Pedreschi, Dino
    [J]. ACM COMPUTING SURVEYS, 2019, 51 (05)
  • [8] Lundberg SM, 2017, ADV NEUR IN, V30
  • [9] Nash-Williams C. S. J., 1961, J LONDON MATH SOC
  • [10] Newman MEJ, 2004, PHYS REV E, V69, DOI 10.1103/PhysRevE.69.066133