Institute for Biomedical Ethics, University of Basel, Basel, Switzerland.
Care and Public Health Research Institute, Maastricht University, Maastricht, the Netherlands.
BMC Med Ethics. 2024 Jan 23;25(1):10. doi: 10.1186/s12910-023-01000-0.
Background: While the theoretical benefits and harms of Artificial Intelligence (AI) have been widely discussed in the academic literature, empirical evidence on the practical ethical challenges of developing AI for healthcare remains scarce. Bridging this gap between theory and practice is an essential step toward understanding how to ethically align AI for healthcare. This study therefore examines the concerns and challenges that experts perceive in developing ethical AI that addresses the healthcare context and its needs.
Methods: We conducted semi-structured interviews with 41 AI experts and analyzed the data using reflexive thematic analysis.
Results: We developed three themes expressing the considerations that experts perceived as essential for ensuring AI aligns with ethical practice in healthcare. The first theme explores the ethical significance of introducing AI with a clear, purposeful objective. The second theme focuses on the tension experts perceive between economic incentives and the importance of prioritizing the interests of doctors and patients. The third theme illustrates the need to develop context-sensitive AI for healthcare that is informed by the underlying theoretical foundations of healthcare.
Conclusions: Collectively, the three themes emphasize that, beyond being innovative, AI must genuinely benefit healthcare and its stakeholders, which means it must also align with intricate, context-specific healthcare practices. Our findings suggest that, rather than narrow product-specific AI guidance, ethical AI development may require a systemic, proactive perspective that incorporates the relevant ethical considerations (objectives, actors, and context) and focuses on healthcare applications. Developing AI ethically involves a complex interplay between AI, ethics, healthcare, and multiple stakeholders.