Explainability for artificial intelligence in healthcare: a multidisciplinary perspective

Cited: 0
Authors
Julia Amann
Alessandro Blasimme
Effy Vayena
Dietmar Frey
Vince I. Madai
Affiliations
[1] ETH Zurich, Health Ethics and Policy Lab, Department of Health Sciences and Technology
[2] Charité - Universitätsmedizin Berlin, Charité Lab for Artificial Intelligence in Medicine—CLAIM
[3] Birmingham City University, School of Computing and Digital Technology, Faculty of Computing, Engineering and the Built Environment
Source
BMC Medical Informatics and Decision Making, Vol. 20
Keywords
Artificial intelligence; Machine learning; Explainability; Interpretability; Clinical decision support;
DOI
Not available