Why did you predict that? Towards explainable artificial neural networks for travel demand analysis

Cited by: 18
Authors
Alwosheel, Ahmad [1 ,2 ]
van Cranenburgh, Sander [2 ]
Chorus, Caspar G. [2 ]
Affiliations
[1] King Abdulaziz City for Science and Technology, Riyadh, Saudi Arabia
[2] Delft University of Technology, Department of Engineering Systems and Services, Transport and Logistics Group, Delft, Netherlands
Keywords
Travel choice analysis; Explainability; Black-box issue; Artificial Neural Networks; Mode; Decisions; Behavior
DOI
10.1016/j.trc.2021.103143
Chinese Library Classification (CLC)
U [Transportation]
Discipline classification code
08; 0823
Abstract
Artificial Neural Networks (ANNs) are rapidly gaining popularity in transportation research in general and in travel demand analysis in particular. While ANNs typically outperform conventional methods in terms of predictive performance, they suffer from limited explainability. That is, it is very difficult to assess whether particular predictions made by an ANN are based on intuitively reasonable relationships embedded in the model. As a result, it is difficult for analysts to gain trust in ANNs. In this paper, we show that often-used perturbation-based approaches (sensitivity analysis) are ill-suited for gaining an understanding of the inner workings of ANNs. Subsequently, and this is the main contribution of this paper, we introduce to the domain of transportation an alternative method, inspired by recent progress in the field of computer vision. This method is based on a re-conceptualisation of the idea of 'heat maps' to explain the predictions of a trained ANN. To create a heat map, a prediction of an ANN is propagated backward through the network towards the input variables, using a technique called Layer-wise Relevance Propagation (LRP). The resulting heat map shows the contribution of each input value (for example, the travel time of a certain mode) to a given travel mode choice prediction. In doing so, the LRP-based heat map reveals the rationale behind the prediction in a way that is understandable to human analysts. If the rationale makes sense to the analyst, trust in the prediction, and, by extension, in the trained ANN as a whole, will increase. If the rationale does not make sense, the analyst may choose to adapt or re-train the ANN, or decide not to use it at all. We show that by re-conceptualising the LRP methodology for the choice modelling and travel demand analysis contexts, it can be put to effective use in application domains well beyond the field of computer vision, for which it was originally developed.
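To make the backward pass concrete, below is a minimal sketch of LRP with the epsilon rule for a small feed-forward network, written in NumPy. The two-layer architecture, the random weights, and the feature names (tt_car, tc_car, tt_train, tc_train) are purely illustrative assumptions for a toy mode-choice setting; they are not the network, data, or exact propagation rule used in the paper.

```python
# Minimal LRP (epsilon rule) sketch for a small feed-forward network.
# Assumptions: ReLU hidden layers, a linear (pre-softmax) output layer,
# and relevance initialised with the score of the predicted class.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def lrp_epsilon(weights, biases, x, target_class, eps=1e-6):
    """Propagate the score of `target_class` back to the input features.

    Epsilon rule: R_j = sum_k (a_j * w_jk / (z_k + eps * sign(z_k))) * R_k
    Returns one relevance value per input feature (the 'heat map').
    """
    # Forward pass, storing the activations of every layer.
    activations = [x]
    a = x
    for l, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        a = z if l == len(weights) - 1 else relu(z)  # linear output layer
        activations.append(a)

    # Initialise relevance with the predicted score of the chosen class.
    R = np.zeros_like(activations[-1])
    R[target_class] = activations[-1][target_class]

    # Backward pass: redistribute relevance layer by layer.
    for l in range(len(weights) - 1, -1, -1):
        W, b = weights[l], biases[l]
        a = activations[l]
        z = W @ a + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilised denominator
        s = R / z
        R = a * (W.T @ s)
    return R

# Illustrative inputs: travel time and cost for two modes (car, train).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
biases = [np.zeros(8), np.zeros(2)]
x = np.array([30.0, 4.5, 45.0, 3.0])  # [tt_car, tc_car, tt_train, tc_train]

R = lrp_epsilon(weights, biases, x, target_class=0)
print(dict(zip(["tt_car", "tc_car", "tt_train", "tc_train"], R.round(3))))
```

Each entry of the returned vector plays the role of one cell in the heat map: a positive value indicates that the corresponding input pushed the model towards the predicted mode, a negative value that it pushed against it, and the entries approximately sum to the class score that was propagated back.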
Pages: 18