Ethics and Philosophy Lab, Cluster of Excellence "Machine Learning: New Perspectives for Science", University of Tübingen, Tübingen D-72076, Germany
J Med Ethics. 2022 Nov;48(11):899-906. doi: 10.1136/medethics-2020-107166. Epub 2021 May 14.
In recent years, there has been a surge of high-profile publications on applications of artificial intelligence (AI) systems for medical diagnosis and prognosis. While AI provides various opportunities for medical practice, there is an emerging consensus that the existing studies show considerable deficits and are unable to establish the clinical benefit of AI systems. Hence, the view that the clinical benefit of AI systems needs to be studied in clinical trials, particularly randomised controlled trials (RCTs), is gaining ground. However, an issue that has been overlooked so far in the debate is that, compared with drug RCTs, AI RCTs require methodological adjustments, which entail ethical challenges. This paper sets out to develop a systematic account of the ethics of AI RCTs by focusing on the moral principles of clinical equipoise, informed consent and fairness. In this way, the objective is to stimulate further debate on the (research) ethics of medical AI.