Schicktanz Silke, Welsch Johannes, Schweda Mark, Hein Andreas, Rieger Jochem W, Kirste Thomas
University Medical Center Göttingen, Department for Medical Ethics and History of Medicine, Göttingen, Germany.
Hanse-Wissenschaftskolleg, Institute for Advanced Study, Delmenhorst, Germany.
Front Genet. 2023 Jun 26;14:1039839. doi: 10.3389/fgene.2023.1039839. eCollection 2023.
Current ethical debates on the use of artificial intelligence (AI) in healthcare treat AI as a product of technology in three ways: first, by assessing the risks and potential benefits of currently developed AI-enabled products with ethical checklists; second, by proposing ex ante lists of ethical values seen as relevant for the design and development of assistive technology; and third, by promoting the use of moral reasoning by AI as part of the automation process. The dominance of these three perspectives in the discourse is demonstrated by a brief summary of the literature. Subsequently, we propose a fourth approach to AI, namely, as a methodological tool to assist ethical reflection. We provide a concept of an AI simulation informed by three separate elements: 1) stochastic human behavior models based on behavioral data for simulating realistic settings, 2) qualitative empirical data on value statements regarding internal policy, and 3) visualization components that aid in understanding the impact of changes in these variables. The potential of this approach is to inform an interdisciplinary field about anticipated ethical challenges or trade-offs in concrete settings and, hence, to spark a re-evaluation of design and implementation plans. This may be particularly useful for applications that involve extremely complex values and behavior, or limited communication resources of the affected persons (e.g., in the care of persons with dementia or other cognitive impairments). Simulation does not replace ethical reflection, but it allows for detailed, context-sensitive analysis during the design process and prior to implementation. Finally, we discuss the inherently quantitative methods of analysis afforded by stochastic simulations, their potential for ethical discussions, and how AI simulations can improve traditional forms of thought experiments and future-oriented technology assessment.
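To make the three elements of such an AI simulation concrete, the following is a minimal illustrative sketch, not the authors' implementation: all states, transition probabilities, and value labels are invented placeholders. It pairs (1) a small stochastic (Markov-chain) behavior model with (2) value statements encoded as simple policy rules, and (3) a crude textual summary standing in for the visualization component, counting how often each ethical trade-off is triggered across simulated trajectories.

```python
import random

# (1) A stochastic human behavior model: a toy Markov chain over daily
# activities. All states and probabilities are invented for illustration.
TRANSITIONS = {
    "resting":         {"resting": 0.6, "wandering": 0.2, "cooking": 0.15, "asking_for_help": 0.05},
    "wandering":       {"resting": 0.3, "wandering": 0.5, "cooking": 0.10, "asking_for_help": 0.10},
    "cooking":         {"resting": 0.4, "wandering": 0.1, "cooking": 0.40, "asking_for_help": 0.10},
    "asking_for_help": {"resting": 0.7, "wandering": 0.1, "cooking": 0.10, "asking_for_help": 0.10},
}

# (2) Value statements from qualitative data, encoded as placeholder policy
# rules: certain behaviors trigger a known ethical trade-off.
POLICY = {
    "wandering": "safety vs. freedom of movement",
    "cooking":   "safety vs. autonomy in daily activities",
}

def simulate(steps, seed=0):
    """Run one behavior trajectory and record the value conflicts it triggers."""
    rng = random.Random(seed)
    state = "resting"
    conflicts = []
    for _ in range(steps):
        options = TRANSITIONS[state]
        state = rng.choices(list(options), weights=list(options.values()))[0]
        if state in POLICY:
            conflicts.append((state, POLICY[state]))
    return conflicts

# (3) A stand-in for the visualization component: aggregate how often each
# trade-off arises over many simulated runs.
def conflict_frequencies(runs=200, steps=50):
    counts = {}
    for seed in range(runs):
        for _, value in simulate(steps, seed):
            counts[value] = counts.get(value, 0) + 1
    return counts
```

In a real application, the transition model would be estimated from behavioral data, the policy rules would be derived from qualitative empirical work with stakeholders, and the frequency counts would feed an interactive visualization; the sketch only shows how the three elements could fit together.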