Should AI models be explainable to clinicians?

Citations: 30
Authors
Abgrall, Gwenole [1 ,2 ]
Holder, Andre L. [3 ]
Dagdia, Zaineb Chelly [4 ]
Zeitouni, Karine [4 ]
Monnet, Xavier [1 ]
Affiliations
[1] Univ Paris Saclay, Serv Med Intens Reanimat, Hopital Bicetre, AP HP,DMU CORREVE 4,Inserm,UMR S 999,FHU SEPSIS,CA, 78 Rue Gen Leclerc, F-94270 Le Kremlin Bicetre, France
[2] Ctr Hosp Univ Grenoble Alpes, Serv Med Intens Reanimat, Av Maquis Gresivaudan, F-38700 La Tronche, France
[3] Emory Univ, Sch Med, Dept Med, Div Pulm Crit Care Allergy & Sleep Med, Atlanta, GA USA
[4] Univ Versailles St Quentin En Yvelines, Lab DAVID, F-78035 Versailles, France
Keywords
Explainable artificial intelligence; Interpretability; Clinical decision-making; Regulatory compliance; Algorithmic bias; Patient autonomy; Fairness; Transparency; Generative artificial intelligence; DECISION-MAKING;
DOI
10.1186/s13054-024-05005-y
Chinese Library Classification
R4 [Clinical Medicine]
Discipline codes
1002; 100602
Abstract
In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet, although XAI is a growing field, defining explainability and standardising its assessment remain open challenges, and a trade-off between performance and explainability may sometimes be necessary.
Pages: 8