Explainable artificial intelligence in transport Logistics: Risk analysis for road accidents
Cited by: 13
Authors:
Abdulrashid, Ismail [1]
Farahani, Reza Zanjirani [2]
Mammadov, Shamkhal [3]
Khalafalla, Mohamed [4]
Chiang, Wen-Chyuan [1]
Affiliations:
[1] Univ Tulsa, Collins Coll Business, Sch Finance & Operat Management, 800 South Tucker Dr, Tulsa, OK 74104 USA
[2] Rennes Sch Business, 2 Rue Robert Arbrissel, F-35065 Rennes, France
[3] Univ Tulsa, Coll Engn, McDougall Sch Petr Engn, Tulsa, OK 74104 USA
[4] Florida A&M Univ, Div Engn Technol, Tallahassee, FL 32307 USA
Keywords:
Transport logistics;
Explainable artificial intelligence;
Road safety;
Injury severity;
Prediction and mitigation;
Taxonomy;
INJURY SEVERITY;
CRASHES;
PREDICTION;
DEPENDENCE;
MODELS;
DOI: 10.1016/j.tre.2024.103563
Chinese Library Classification (CLC): F [Economics]
Discipline code: 02
Abstract:
Automobile traffic accidents represent a significant threat to global public safety, resulting in numerous injuries and fatalities annually. This paper introduces a comprehensive, explainable artificial intelligence (XAI) artifact design, integrating accident data for utilization by diverse stakeholders and decision-makers. It proposes responsible, explanatory, and interpretable models with a systems-level taxonomy categorizing aspects of driver-related behaviors associated with varying injury severity levels, thereby contributing theoretically to explainable analytics. In the initial phase, we employed several advanced techniques: a missing-at-random (MAR) assumption with Bayesian dynamic conditional imputation to address missing records, the synthetic minority oversampling technique (SMOTE) to correct class imbalance, and categorical boosting (CatBoost) combined with SHapley Additive exPlanations (SHAP) to determine and analyze the importance and dependence of risk factors on injury severity. Additionally, exploratory feature analysis was conducted to uncover hidden spatiotemporal elements influencing traffic accidents and injury severity levels. In the second phase, we developed several predictive models with fine-tuned hyperparameters, including eXtreme Gradient Boosting (XGBoost), random forest (RF), and deep neural networks (DNN). Using the SHAP approach, we employed model-agnostic interpretation techniques to separate explanations from models. In the final phase, we provided an analysis and summary of the system-level taxonomy across feature categories. This involved classifying crash data into high-level causal factors using aggregate SHAP scores, illustrating how each risk factor contributes to different injury severity levels.
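The abstract names SMOTE as the tool for the class-imbalance step (severe-injury crashes are typically far rarer than minor ones). The following is a minimal, illustrative sketch of SMOTE's core idea only, not the authors' pipeline: synthetic minority rows are created by interpolating between a minority sample and one of its k nearest minority-class neighbours. The function name and toy data are my own for illustration.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, seed=None):
    """Generate n_new synthetic minority-class rows by interpolating
    between a randomly chosen minority point and one of its k nearest
    minority-class neighbours (the core idea of SMOTE)."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from the chosen point to every minority point
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        # indices of the k nearest neighbours, excluding the point itself
        nn = np.argsort(d)[1:k + 1]
        j = rng.choice(nn)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)

# Imbalanced toy data: 40 majority rows vs. 8 minority rows.
rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, size=(40, 2))
X_min = rng.normal(3.0, 0.5, size=(8, 2))

# Oversample the minority class up to parity (8 real + 32 synthetic).
X_new = smote_oversample(X_min, n_new=32, k=3, seed=1)
X_balanced = np.vstack([X_maj, X_min, X_new])
print(X_balanced.shape)  # (80, 2) -- 40 majority vs. 40 minority rows
```

In practice one would use a maintained implementation (e.g. `imblearn.over_sampling.SMOTE`) and then fit the boosted-tree models described above on the rebalanced data; this sketch only shows why the synthetic points stay inside the convex hull of the minority class.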
Pages: 19