Collins Benjamin X, Bélisle-Pipon Jean-Christophe, Evans Barbara J, Ferryman Kadija, Jiang Xiaoqian, Nebeker Camille, Novak Laurie, Roberts Kirk, Were Martin, Yin Zhijun, Ravitsky Vardit, Coco Joseph, Hendricks-Sturrup Rachele, Williams Ishan, Clayton Ellen W, Malin Bradley A
Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
JAMIA Open. 2024 Nov 15;7(4):ooae108. doi: 10.1093/jamiaopen/ooae108. eCollection 2024 Dec.
Artificial intelligence (AI) proceeds through an iterative and evaluative process of development, use, and refinement, which may be characterized as a lifecycle. Within this context, stakeholders can vary in their interests and perceptions of the ethical issues associated with this rapidly evolving technology in ways that can fail to identify and avert adverse outcomes. Identifying issues throughout the AI lifecycle in a systematic manner can facilitate better-informed ethical deliberation.
We analyzed existing lifecycles in the current literature on the ethical issues of AI in healthcare to identify themes, which we then consolidated into a more comprehensive lifecycle. We next considered the potential benefits and harms of AI across this lifecycle to identify the ethical questions that can arise at each step and the points at which conflicts and errors could arise in ethical analysis. We illustrate the approach in 3 case studies that highlight how different ethical dilemmas arise at different points in the lifecycle.
Through these case studies, we show how a systematic, lifecycle-informed approach to the ethical analysis of AI enables the effects of AI to be mapped onto distinct lifecycle steps, guiding deliberation on benefits and harms. The lifecycle-informed approach has broad applicability across stakeholders and can facilitate communication on ethical issues among patients, healthcare professionals, research participants, and other stakeholders.