Vilans Centre of Expertise of Long Term Care, Utrecht, Netherlands.
Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands.
JMIR Nurs. 2024 Jul 25;7:e55962. doi: 10.2196/55962.
Although the use of artificial intelligence (AI)-based technologies, such as AI-based decision support systems (AI-DSSs), can help sustain and improve the quality and efficiency of care, their deployment creates ethical and social challenges. In recent years, a growing number of high-level guidelines and frameworks for responsible AI innovation have emerged. However, few studies have specified how to responsibly embed AI-based technologies, such as AI-DSSs, in specific contexts, such as the nursing process in long-term care (LTC) for older adults.
Prerequisites for responsible AI-assisted decision-making in nursing practice were explored from the perspectives of nurses and other professional stakeholders in LTC.
Semistructured interviews were conducted with 24 care professionals in Dutch LTC, including nurses, care coordinators, data specialists, and care centralists. A total of 2 imaginary scenarios about AI-DSSs were developed beforehand and used to enable participants to articulate their expectations regarding the opportunities and risks of AI-assisted decision-making. In addition, 6 high-level principles for responsible AI were used as probing themes to evoke further consideration of the risks associated with using AI-DSSs in LTC. Furthermore, the participants were asked to brainstorm possible strategies and actions in the design, implementation, and use of AI-DSSs to address or mitigate these risks. A thematic analysis was performed to identify the opportunities and risks of AI-assisted decision-making in nursing practice and the associated prerequisites for responsible innovation in this area.
The stance of care professionals on the use of AI-DSSs is not a matter of purely positive or negative expectations but rather a nuanced interplay of positive and negative elements that leads to a weighed perception of the prerequisites for responsible AI-assisted decision-making. Both opportunities and risks were identified in relation to the early identification of care needs, guidance in devising care strategies, shared decision-making, and the workload and work experience of caregivers. To optimally balance the opportunities and risks of AI-assisted decision-making, 7 categories of prerequisites for responsible AI-assisted decision-making in nursing practice were identified: (1) regular deliberation on data collection; (2) a balanced proactive nature of AI-DSSs; (3) incremental advancements aligned with trust and experience; (4) customization for all user groups, including clients and caregivers; (5) measures to counteract bias and narrow perspectives; (6) human-centric learning loops; and (7) the routinization of using AI-DSSs.
The opportunities of AI-assisted decision-making in nursing practice could turn into drawbacks depending on the specific shaping of the design and deployment of AI-DSSs. Therefore, we recommend considering the responsible use of AI-DSSs as a balancing act. Moreover, considering the interrelatedness of the identified prerequisites, we call for various actors, including developers and users of AI-DSSs, to cohesively address the different factors important to the responsible embedding of AI-DSSs in practice.