van den Heuvel Julia, Porter Anthony, Kirkpatrick Emily, Verjans Johan, Reddy Sandeep, Freckelton Ian
Australian Institute for Machine Learning, University of Adelaide.
School of Public Health and Social Work, Queensland University of Technology.
J Law Med. 2025 Jun;32(1):74-84.
The integration of artificial intelligence (AI) into health care presents significant challenges for traditional informed consent practices. This review examines the legal and ethical implications of using AI in clinical decision-making, with a focus on maintaining transparency and respecting patient autonomy. While the legal framework for informed consent remains clear, requiring clinicians to provide sufficient information on material risks and likely outcomes, the complexity of AI introduces nuances that demand adaptation. Unlike surgical consent, where decisions are directly tied to human judgment, AI systems analyse vast datasets and identify patterns beyond human comprehension, complicating clinicians' ability to provide clear explanations. However, this does not necessitate a complete overhaul of informed consent but, rather, careful reassessment. Practical approaches include tiered consent protocols tailored to AI complexity and enhanced clinician education to bridge the communication gap. By addressing these challenges, informed consent can evolve to support ethical AI integration while preserving patient trust and decision-making.