Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives

Cited by: 3
Authors
Maris, Menno T. [1 ]
Kocar, Ayca [2 ]
Willems, Dick L. [1 ]
Pols, Jeannette [1 ,3 ]
Tan, Hanno L. [4 ,5 ]
Lindinger, Georg L. [2 ]
Bak, Marieke A. R. [1 ,6 ]
Affiliations
[1] Univ Amsterdam, Dept Eth Law & Humanities, Amsterdam UMC, Amsterdam, Netherlands
[2] Univ Bayreuth, Inst Healthcare Management & Hlth Sci, Bayreuth, Germany
[3] Univ Amsterdam, Dept Anthropol, Amsterdam, Netherlands
[4] Univ Amsterdam, Dept Clin & Expt Cardiol, Amsterdam UMC, Amsterdam, Netherlands
[5] Netherlands Heart Inst, Utrecht, Netherlands
[6] Tech Univ Munich, Inst Hist & Eth Med, TUM Sch Med, Munich, Germany
Funding
European Union Horizon 2020
Keywords
Artificial intelligence; Ethics; Sudden cardiac death; Patient values; PROFID; Implantable cardioverter defibrillator; Personalized medicine; QUALITATIVE RESEARCH;
DOI
10.1186/s12910-024-01042-y
Chinese Library Classification
B82 [Ethics (Moral Philosophy)]
Abstract
Background: The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, while the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project, we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD).

Aim: To explore patients' perspectives on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD).

Methods: Semi-structured, future-scenario-based interviews were conducted among patients who had an ICD and/or a heart condition with increased risk of SCD in Germany (n = 9) and the Netherlands (n = 15). We used the principles of the European Commission's Ethics Guidelines for Trustworthy AI to structure the interviews.

Results: Six themes arose from the interviews: the ability of AI to rectify human doctors' limitations; the objectivity of data; whether AI can serve as a second opinion; AI explainability and patient trust; the importance of the 'human touch'; and the personalization of care. Overall, our results reveal a strong desire among patients for more personalized and patient-centered care in the context of ICD implantation. Participants in our study express significant concerns about the further loss of the 'human touch' in healthcare when AI is introduced in clinical settings. They believe that this aspect of care is currently inadequately recognized in clinical practice. Participants attribute to doctors the responsibility of evaluating AI recommendations for clinical relevance and aligning them with patients' individual contexts and values, in consultation with the patient.

Conclusion: The 'human touch' that patients exclusively ascribe to human medical practitioners extends beyond sympathy and kindness, and has clinical relevance in medical decision-making. Because this cannot be replaced by AI, we suggest that normative research into the 'right to a human doctor' is needed. Furthermore, policies on patient-centered AI integration in clinical practice should encompass the ethics of everyday practice rather than only principle-based ethics. We suggest that an empirical ethics approach grounded in ethnographic research is exceptionally well-suited to pave the way forward.
Pages: 15
References
71 records in total
  • [1] Exploring patient perspectives on how they can and should be engaged in the development of artificial intelligence (AI) applications in health care
    Adus, Samira
    Macklin, Jillian
    Pinto, Andrew
    [J]. BMC HEALTH SERVICES RESEARCH, 2023, 23 (01)
  • [2] AI HLEG, 2019, Ethics Guidelines for Trustworthy AI, European Commission
  • [3] Ala-Pietila P., 2020, The Assessment List for Trustworthy Artificial Intelligence (ALTAI)
  • [4] A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion
    Albahri, A. S.
    Duhaim, Ali M.
    Fadhel, Mohammed A.
    Alnoor, Alhamzah
    Baqer, Noor S.
    Alzubaidi, Laith
    Albahri, O. S.
    Alamoodi, A. H.
    Bai, Jinshuai
    Salhi, Asma
    Santamaria, Jose
    Ouyang, Chun
    Gupta, Ashish
    Gu, Yuantong
    Deveci, Muhammet
    [J]. INFORMATION FUSION, 2023, 96 : 156 - 191
  • [5] Expectations and attitudes towards medical artificial intelligence: A qualitative study in the field of stroke
    Amann, Julia
    Vayena, Effy
    Ormond, Kelly E.
    Frey, Dietmar
    Madai, Vince I.
    Blasimme, Alessandro
    [J]. PLOS ONE, 2023, 18 (01)
  • [6] Explainability for artificial intelligence in healthcare: a multidisciplinary perspective
    Amann, Julia
    Blasimme, Alessandro
    Vayena, Effy
    Frey, Dietmar
    Madai, Vince I.
    [J]. BMC MEDICAL INFORMATICS AND DECISION MAKING, 2020, 20 (01)
  • [7] Armoundas AA, 2024, Circulation
  • [8] Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum
    Ayers, John W.
    Poliak, Adam
    Dredze, Mark
    Leas, Eric C.
    Zhu, Zechariah
    Kelley, Jessica B.
    Faix, Dennis J.
    Goodman, Aaron M.
    Longhurst, Christopher A.
    Hogarth, Michael
    Smith, Davey M.
    [J]. JAMA INTERNAL MEDICINE, 2023, 183 (06) : 589 - 596
  • [9] Cardiac self-efficacy and quality of life in patients with coronary heart disease: a cross-sectional study from Palestine
    Barham, Aya
    Ibraheem, Reem
    Zyoud, Sa'ed H.
    [J]. BMC CARDIOVASCULAR DISORDERS, 2019, 19 (01)
  • [10] Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons
    Benzinger, Lasse
    Ursin, Frank
    Balke, Wolf-Tilo
    Kacprowski, Tim
    Salloch, Sabine
    [J]. BMC MEDICAL ETHICS, 2023, 24 (01)