Trust criteria for artificial intelligence in health: normative and epistemic considerations

Cited by: 9
Authors
Kostick-Quenet, Kristin [1 ]
Lang, Benjamin H. [1 ,2 ]
Smith, Jared [1 ]
Hurley, Meghan [1 ]
Blumenthal-Barby, Jennifer [1 ]
Affiliations
[1] Baylor College of Medicine, Center for Medical Ethics and Health Policy, Houston, TX 77030, USA
[2] University of Oxford, Department of Philosophy, Oxford, England
Keywords
Decision Making; Ethics; Research; Quality of Health Care; AUTOMATION; EXPLANATIONS; NEED
DOI
10.1136/jme-2023-109338
Chinese Library Classification
B82 [Ethics (Moral Science)]
Abstract
Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high-stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool's computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can lead to over-reliance or under-reliance on algorithmic tools, with significant consequences for patient safety and health outcomes. It is, thus, important to better understand how variability in trust criteria across stakeholders, settings, tools and use cases may influence approaches to using AI/ML tools in real-world settings. As part of a 5-year, multi-institutional Agency for Healthcare Research and Quality-funded study, we identify trust criteria for a survival prediction algorithm intended to support clinical decision-making for left ventricular assist device therapy, using semistructured interviews (n=40) with patients and physicians, analysed via thematic analysis. Findings suggest that physicians and patients share similar empirical considerations for trust, which were primarily epistemic in nature, focused on the accuracy and validity of AI/ML estimates. Trust evaluations considered the nature, integrity and relevance of training data rather than the computational nature of the algorithms themselves, suggesting a need to distinguish 'source' from 'functional' explainability. To a lesser extent, trust criteria were also relational (endorsement from others) and sometimes based on personal beliefs and experience. We discuss implications for promoting appropriate and responsible trust calibration for clinical decision-making using AI/ML.
Pages: 544-551 (8 pages)