Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals' preferences and concerns

Cited by: 17
Authors
Funer, Florian [1 ,2 ]
Liedtke, Wenke [3 ]
Tinnemeyer, Sara [1 ]
Klausen, Andrea Diana [4 ]
Schneider, Diana [5 ]
Zacharias, Helena U. [6 ,7 ]
Langanke, Martin [3 ]
Salloch, Sabine [1 ]
Affiliations
[1] Hannover Med Sch, Inst Ethics Hist & Philosophy Med, Hannover, Germany
[2] Eberhard Karls Univ Tubingen, Inst Ethics & Hist Med, Tubingen, Germany
[3] Protestant Univ Appl Sci RWL, Dept Social Work, Bochum, Germany
[4] Rhein Westfal TH Aachen, Inst Med Informat, Aachen, Germany
[5] Fraunhofer Inst Syst & Innovat Res ISI, Competence Ctr Emerging Technol, Karlsruhe, Germany
[6] TU Braunschweig, Peter L Reichertz Inst Med Informat, Hannover, Germany
[7] Hannover Med Sch, Hannover, Germany
Keywords
Ethics, Medical; Decision Making; Ethics; Health Personnel; Information Technology
DOI
10.1136/jme-2022-108814
Chinese Library Classification
B82 [Ethics (Moral Philosophy)]
Abstract
Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals' attitudes to potential changes in responsibility and decision-making authority when using ML-CDSSs. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed using qualitative content analysis according to Kuckartz. The interviewees' reflections are presented under three themes which the interviewees describe as closely related: (self-)attribution of responsibility, decision-making authority and need for (professional) experience. The results illustrate the conceptual interconnectedness of professional responsibility and the structural and epistemic preconditions that must be met for clinicians to fulfil their responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSSs.
Pages: 6-11
Number of pages: 6