Interpretability of machine learning-based prediction models in healthcare

Cited by: 262
Authors
Stiglic, Gregor [1 ,2 ]
Kocbek, Primoz [1 ]
Fijacko, Nino [1 ]
Zitnik, Marinka [3 ]
Verbert, Katrien [4 ]
Cilar, Leona [1 ]
Affiliations
[1] Univ Maribor, Fac Hlth Sci, Maribor 2000, Slovenia
[2] Univ Maribor, Fac Elect Engn & Comp Sci, Maribor, Slovenia
[3] Harvard Univ, Dept Biomed Informat, Cambridge, MA 02138 USA
[4] Katholieke Univ Leuven, Dept Comp Sci, Leuven, Belgium
Keywords
interpretability; machine learning; model agnostic; model specific; prediction models; CLASSIFICATION; DECISIONS;
DOI
10.1002/widm.1379
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
There is a need to ensure that machine learning (ML) models are interpretable. Higher interpretability makes a model's future predictions easier for end-users to comprehend and explain. Moreover, interpretable ML models allow healthcare experts to make reasonable, data-driven, and personalized decisions that can ultimately lead to a higher quality of service in healthcare. Generally, interpretability approaches can be classified into two groups: the first focuses on personalized interpretation (local interpretability), while the second summarizes prediction models at the population level (global interpretability). Alternatively, interpretability methods can be grouped into model-specific techniques, which are designed to interpret predictions generated by a particular model, such as a neural network, and model-agnostic approaches, which provide easy-to-understand explanations of predictions made by any ML model. Here, we give an overview of interpretability approaches using structured data and provide examples of practical interpretability of ML in different areas of healthcare, including predicting health-related outcomes, optimizing treatments, and improving the efficiency of screening for specific conditions. Further, we outline future directions for interpretable ML and highlight the importance of developing algorithmic solutions that can enable ML-driven decision making in high-stakes healthcare problems. This article is categorized under: Application Areas > Health Care
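To illustrate the model-agnostic category the abstract describes, the following sketch (not taken from the paper; the synthetic dataset and scikit-learn-based setup are assumptions for illustration) applies permutation importance, a global, model-agnostic method that explains any fitted classifier by measuring how much its score drops when each feature is shuffled:

```python
# Illustrative sketch of a model-agnostic interpretability method:
# permutation importance works with any fitted estimator, since it
# only needs predictions, not access to model internals.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for structured patient records.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record the mean accuracy drop:
# a global explanation of which features the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance drop = {score:.3f}")
```

The same call would explain a neural network or gradient-boosted model unchanged, which is what distinguishes model-agnostic approaches from model-specific ones such as inspecting a network's weights.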
Pages: 13