AI-Assisted Decision-Making in Long-Term Care: Qualitative Study on Prerequisites for Responsible Innovation

Cited by: 1
Authors
Lukkien, Dirk R. M. [1 ,2 ]
Stolwijk, Nathalie E. [1 ]
Askari, Sima Ipakchian [1 ,3 ]
Hofstede, Bob M. [1 ,3 ]
Nap, Henk Herman [1 ,3 ]
Boon, Wouter P. C. [2 ]
Peine, Alexander [4 ]
Moors, Ellen H. M. [2 ]
Minkman, Mirella M. N. [1 ,5 ]
Affiliations
[1] Vilans Ctr Expertise Long Term Care, Churchilllaan 11, NL-3505 RE Utrecht, Netherlands
[2] Univ Utrecht, Copernicus Inst Sustainable Dev, Utrecht, Netherlands
[3] Eindhoven Univ Technol, Human Technol Interact, Eindhoven, Netherlands
[4] Open Univ Netherlands, Fac Humanities, Heerlen, Netherlands
[5] Tilburg Univ, TIAS Sch Business & Soc, Tilburg, Netherlands
Source
JMIR NURSING | 2024 / Vol. 7
Keywords
decision support systems; ethics; long-term care; responsible innovation; stakeholder perspectives; SUPPORT-SYSTEMS; AUTOMATION; HEALTH; BIAS
DOI
10.2196/55962
CLC number
R47 [Nursing]
Discipline code
1011
Abstract
Background: Although the use of artificial intelligence (AI)-based technologies, such as AI-based decision support systems (AI-DSSs), can help sustain and improve the quality and efficiency of care, their deployment creates ethical and social challenges. In recent years, a growing prevalence of high-level guidelines and frameworks for responsible AI innovation has been observed. However, few studies have specified the responsible embedding of AI-based technologies, such as AI-DSSs, in specific contexts, such as the nursing process in long-term care (LTC) for older adults.

Objective: Prerequisites for responsible AI-assisted decision-making in nursing practice were explored from the perspectives of nurses and other professional stakeholders in LTC.

Methods: Semistructured interviews were conducted with 24 care professionals in Dutch LTC, including nurses, care coordinators, data specialists, and care centralists. A total of 2 imaginary scenarios about AI-DSSs were developed beforehand and used to enable participants to articulate their expectations regarding the opportunities and risks of AI-assisted decision-making. In addition, 6 high-level principles for responsible AI were used as probing themes to evoke further consideration of the risks associated with using AI-DSSs in LTC. Furthermore, the participants were asked to brainstorm possible strategies and actions in the design, implementation, and use of AI-DSSs to address or mitigate these risks. A thematic analysis was performed to identify the opportunities and risks of AI-assisted decision-making in nursing practice and the associated prerequisites for responsible innovation in this area.

Results: The stance of care professionals on the use of AI-DSSs is not a matter of purely positive or negative expectations but rather a nuanced interplay of positive and negative elements that leads to a weighed perception of the prerequisites for responsible AI-assisted decision-making. Both opportunities and risks were identified in relation to the early identification of care needs, guidance in devising care strategies, shared decision-making, and the workload and work experience of caregivers. To optimally balance the opportunities and risks of AI-assisted decision-making, 7 categories of prerequisites for responsible AI-assisted decision-making in nursing practice were identified: (1) regular deliberation on data collection; (2) a balanced proactive nature of AI-DSSs; (3) incremental advancements aligned with trust and experience; (4) customization for all user groups, including clients and caregivers; (5) measures to counteract bias and narrow perspectives; (6) human-centric learning loops; and (7) the routinization of using AI-DSSs.

Conclusions: The opportunities of AI-assisted decision-making in nursing practice could turn into drawbacks depending on the specific shaping of the design and deployment of AI-DSSs. Therefore, we recommend considering the responsible use of AI-DSSs as a balancing act. Moreover, considering the interrelatedness of the identified prerequisites, we call for various actors, including developers and users of AI-DSSs, to cohesively address the different factors important to the responsible embedding of AI-DSSs in practice.
Pages: 17