Petersson Lena, Vincent Kalista, Svedberg Petra, Nygren Jens M, Larsson Ingrid
School of Health and Welfare, Halmstad University, Halmstad, Sweden.
Digit Health. 2023 Oct 11;9:20552076231206588. doi: 10.1177/20552076231206588. eCollection 2023 Jan-Dec.
Artificial intelligence (AI) is predicted to be a solution for improving healthcare, increasing efficiency, and saving time and resources. Given the recent attention to AI, several stakeholders have highlighted the lack of ethical principles guiding its use in practice. Research has shown an urgent need for more knowledge regarding the ethical implications of AI applications in healthcare. However, fundamental ethical principles may not be sufficient to describe the ethical concerns associated with implementing AI applications.
The aim of this study is twofold: (1) to use the implementation of AI applications for predicting patient mortality in emergency departments as a setting in which to explore healthcare professionals' perspectives on ethical issues in relation to ethical principles, and (2) to develop a model, grounded in ethical theory, to guide ethical considerations when implementing AI in healthcare.
Semi-structured interviews were conducted with 18 participants. The abductive approach used to analyze the empirical data consisted of four steps alternating between inductive and deductive analyses.
Our findings provide an ethical model demonstrating the need to address six ethical principles (autonomy, beneficence, non-maleficence, justice, explicability, and professional governance) in relation to the ethical theories of virtue ethics, deontology, and consequentialism when AI applications are implemented in clinical practice.
The ethical aspects of AI applications are broader than the prima facie principles of medical ethics and the principle of explicability. They therefore need to be viewed from a broader perspective to cover the different situations that healthcare professionals in general, and physicians in particular, may face when using AI applications in clinical practice.