Center for Biomedical Ethics and Society, Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN.
Department of Social Medicine, Ohio University-Heritage College of Osteopathic Medicine, Athens, OH.
Chest. 2024 Sep;166(3):572-578. doi: 10.1016/j.chest.2024.04.014. Epub 2024 May 22.
Artificial intelligence (AI) is increasingly being used in health care. Without an ethically supportable, standardized approach for determining when patients should be informed about AI, hospital systems and clinicians risk fostering mistrust among their patients and the public. Hospital leaders therefore need guidance on when to tell patients about the use of AI in their care, and in this article we provide such guidance. To determine which AI technologies fall into each of three disclosure categories (no notification or informed consent [IC], notification only, and formal IC), we propose that AI use cases be evaluated against five criteria: (1) AI model autonomy, (2) departure from standards of practice, (3) whether the AI model is patient facing, (4) clinical risk introduced by the model, and (5) administrative burdens. We take each criterion in turn, using a case example of AI in health care to illustrate the proposed framework. As AI becomes more commonplace in health care, our proposal may serve as a starting point for building consensus on standards for notification and IC for the use of AI in patient care.
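The abstract names the five evaluation criteria and the three disclosure categories but does not specify how the criteria combine to select a category. The sketch below is a minimal, hypothetical illustration in Python of how such a triage might be encoded: the boolean encoding of each criterion, the AIUseCase data model, and the escalation rule in disclosure_category are all assumptions made for illustration, not the authors' published decision rule.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Disclosure(Enum):
    """The three disclosure categories proposed in the article."""
    NONE = auto()               # no notification or informed consent (IC)
    NOTIFICATION = auto()       # notification only
    INFORMED_CONSENT = auto()   # formal IC


@dataclass
class AIUseCase:
    """The five criteria named in the abstract.

    Encoding each criterion as a boolean is an illustrative assumption;
    the article does not prescribe a data model.
    """
    model_autonomy: bool         # does the model act without clinician review?
    departs_from_standard: bool  # does use depart from standards of practice?
    patient_facing: bool         # does the patient interact with the model?
    clinical_risk: bool          # does the model introduce clinical risk?
    administrative_burden: bool  # would formal IC impose heavy administrative burdens?


def disclosure_category(case: AIUseCase) -> Disclosure:
    """Hypothetical escalation rule, assumed for illustration only."""
    # Autonomous, risky, or non-standard uses escalate toward formal IC,
    # unless the administrative burden of obtaining IC is judged prohibitive.
    if case.model_autonomy or case.clinical_risk or case.departs_from_standard:
        if case.administrative_burden:
            return Disclosure.NOTIFICATION
        return Disclosure.INFORMED_CONSENT
    # Patient-facing but low-risk, clinician-mediated tools: notification only.
    if case.patient_facing:
        return Disclosure.NOTIFICATION
    # Back-office, low-risk tools under full clinician oversight: no notification.
    return Disclosure.NONE


# Example: an autonomous model introducing clinical risk, not patient facing.
print(disclosure_category(AIUseCase(
    model_autonomy=True, departs_from_standard=False,
    patient_facing=False, clinical_risk=True, administrative_burden=False,
)))  # Disclosure.INFORMED_CONSENT
```

One design note: treating administrative burden as a factor that can step formal IC down to notification reflects the abstract's listing of burdens as a countervailing criterion, but where exactly it tips the balance is a judgment the article leaves to hospital leaders.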