Keesler Medical Center, Keesler Air Force Base, Biloxi, MS, USA.
Attorney, Mountain Home, ID, USA.
J Osteopath Med. 2024 Jan 31;124(7):287-290. doi: 10.1515/jom-2023-0229. eCollection 2024 Jul 1.
The emergence of generative large language model (LLM) artificial intelligence (AI) represents one of the most profound developments in healthcare in decades, with the potential to create revolutionary and seismic changes in the practice of medicine as we know it. However, significant concerns have arisen over questions of liability for bad outcomes associated with LLM AI-influenced medical decision-making. Although the authors were unable to identify a United States case that has been adjudicated on medical malpractice in the context of LLM AI at this time, sufficient precedent exists to anticipate how analogous situations might apply when such cases inevitably come to trial. This commentary discusses areas of potential legal vulnerability for clinicians utilizing LLM AI through a review of past case law pertaining to third-party medical guidance, and surveys the patchwork of current regulations relating to medical malpractice liability in AI. Finally, we propose proactive policy recommendations: creating an enforcement duty at the US Food and Drug Administration (FDA) to require algorithmic transparency, recommending reliance on peer-reviewed data and rigorous validation testing when LLMs are utilized in clinical settings, and encouraging tort reform to share liability between physicians and LLM developers.