

Can generative artificial intelligence provide accurate medical advice?: a case of ChatGPT versus Congress of Neurological Surgeons management of acute cervical spine and spinal cord injuries clinical guidelines.

Author Information

Saturno Michael, Mejia Mateo Restrepo, Ahmed Wasil, Yu Alexander, Duey Akiro, Zaidat Bashar, Hijji Fady, Markowitz Jonathan, Kim Jun, Cho Samuel

Affiliations

Icahn School of Medicine at Mount Sinai, New York, NY, USA.

Publication Information

Asian Spine J. 2025 Mar 4. doi: 10.31616/asj.2024.0301.

Abstract

STUDY DESIGN

An experimental study.

PURPOSE

To explore the concordance of ChatGPT responses with established national guidelines for the management of cervical spine and spinal cord injuries.

OVERVIEW OF LITERATURE

ChatGPT-4.0 is an artificial intelligence model that can synthesize large volumes of data and may provide surgeons with recommendations for the management of spinal cord injuries. However, no available literature has quantified ChatGPT's capacity to provide accurate recommendations for the management of cervical spine and spinal cord injuries.

METHODS

A total of 36 questions were formulated from the "Management of acute cervical spine and spinal cord injuries" guidelines published by the Congress of Neurological Surgeons (CNS). Questions were stratified into therapeutic, diagnostic, or clinical assessment categories, following the structure of the guidelines. Questions were secondarily grouped according to whether the corresponding recommendation was supported by level I evidence (highest quality) or only by level II/III evidence (moderate and low quality). ChatGPT-4.0 was prompted with each question, and its responses were assessed by two independent reviewers as "concordant" or "nonconcordant" with the CNS clinical guidelines. "Nonconcordant" responses were further classified as "insufficient" or "contradictory."
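As an illustration only, since the abstract does not describe how the model was queried, the prompting step could be scripted against the OpenAI chat completions API roughly as follows. This is a minimal sketch: the model identifier, the sample question, and the helper name are assumptions, not details from the study.

```python
# Minimal sketch of the evaluation loop described in METHODS, assuming the
# OpenAI Python SDK (v1) is used. The actual study may have used the ChatGPT
# web interface; this is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-in for the 36 guideline-derived questions.
questions = [
    "Is methylprednisolone recommended for acute spinal cord injury?",
    # ... remaining questions formulated from the CNS guidelines ...
]

def ask(question: str) -> str:
    """Send one guideline-derived question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption; the paper refers to "ChatGPT-4.0"
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Each reply would then be exported for the two independent reviewers, who
# label it "concordant" or "nonconcordant" with the CNS guidelines.
replies = {q: ask(q) for q in questions}
```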

RESULTS

In this study, 22/36 (61.1%) of ChatGPT's responses were concordant with the CNS guidelines. ChatGPT's responses aligned with 17/24 (70.8%) therapeutic questions and 4/7 (57.1%) diagnostic questions, but with only 1/5 (20.0%) clinical assessment questions. Notably, recommendations supported by level I evidence were the least likely to be replicated by ChatGPT, whereas its responses agreed with 80.8% of the recommendations supported exclusively by level II/III evidence.
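The abstract reports the level II/III figure only as a percentage. Assuming 80.8% is a rounded exact fraction of a subset of the 36 questions, the underlying counts can be recovered by brute force, as in the sketch below; the resulting 21/26 split (and hence 10 level I questions, of which only 22 - 21 = 1 was answered concordantly) is an inference, not a figure stated in the abstract.

```python
# Recover which count/denominator pairs are consistent with the reported
# 80.8% concordance among recommendations supported only by level II/III
# evidence. Only the 36-question total comes from the abstract; the
# resulting 21/26 split is an inference.
for denom in range(1, 37):        # candidate number of level II/III-only questions
    for num in range(denom + 1):  # candidate concordant responses among them
        if round(100 * num / denom, 1) == 80.8:
            print(f"{num}/{denom} = {100 * num / denom:.1f}%")
# Prints only: 21/26 = 80.8%
```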

CONCLUSIONS

ChatGPT-4 was moderately accurate in generating recommendations that aligned with the clinical guidelines. The model most often matched therapeutic recommendations and those supported only by lower-level evidence, but performed worse on topics backed by high-quality (level I) evidence or pertaining to diagnostic and clinical assessment strategies. Medical practitioners should monitor its use until future models can be rigorously trained on medical data.

