Ensuring AI explainability in healthcare: problems and possible policy solutions

Cited: 6
Authors
Aranovich, Tatiana de Campos [1 ]
Matulionyte, Rita [2 ]
Affiliations
[1] Univ Fed Rio Grande do Sul, Law Sch, Porto Alegre, RS, Brazil
[2] Macquarie Univ, Law Sch, Sydney, NSW, Australia
Keywords
Artificial intelligence; machine learning; transparency; explainability; medical device; regulatory approval; ARTIFICIAL-INTELLIGENCE; BLACK-BOX; EXPLANATION; DECISIONS; ALGORITHM; APPROVAL; BIAS
DOI
10.1080/13600834.2022.2146395
Chinese Library Classification (CLC)
D9 [Law]; DF [Law]
Subject Classification Code
0301
Abstract
AI promises to address quality and cost challenges in health services; however, errors and bias in medical device decisions pose threats to human health and life. This has also led to a lack of trust in AI medical devices among clinicians and patients. The goal of this article is to assess whether the AI explainability principle, established in numerous ethical AI frameworks, can help address these and other challenges posed by AI medical devices. We first define the AI explainability principle, distinguish it from the AI transparency principle, and examine which stakeholders in the healthcare sector would need AI to be explainable and for what purposes. Second, we analyze whether explainable AI in healthcare is capable of achieving its intended goals. Finally, we examine a robust regulatory approval framework as an alternative, and more suitable, way of addressing the challenges caused by black-box AI.
Pages: 259-275
Page count: 17