Kayastha Ankur, Lakshmanan Kirthika, Valentine Michael J, Nguyen Anh, Dholakia Kaushal, Wang Daniel
Kansas City University, Kansas City, MO, United States.
MedStar Health, Baltimore, MD, United States.
N Am Spine Soc J. 2024 Jun 1;19:100333. doi: 10.1016/j.xnsj.2024.100333. eCollection 2024 Sep.
ChatGPT is an advanced artificial intelligence (AI) language model capable of generating responses to clinical questions regarding lumbar disc herniation with radiculopathy. AI tools are increasingly being considered to assist clinicians in decision-making. This study compared ChatGPT-3.5 and ChatGPT-4.0 responses against the established North American Spine Society (NASS) clinical guidelines and evaluated their concordance.
ChatGPT-3.5 and ChatGPT-4.0 were prompted with 15 questions from the 2012 NASS Clinical Guidelines for the diagnosis and treatment of lumbar disc herniation with radiculopathy. Clinical questions, organized into categories, were entered into ChatGPT as unmodified queries. The language output was assessed by two independent authors on September 26, 2023 against four operationally defined parameters: accuracy, over-conclusiveness, supplementary information, and incompleteness. ChatGPT-3.5 and ChatGPT-4.0 performance was compared via chi-square analyses.
Among the 15 responses produced by ChatGPT-3.5, 7 (47%) were accurate, 7 (47%) were over-conclusive, 15 (100%) were supplementary, and 6 (40%) were incomplete. For ChatGPT-4.0, 10 (67%) were accurate, 5 (33%) were over-conclusive, 10 (67%) were supplementary, and 6 (40%) were incomplete. There was a statistically significant difference in supplementary information (100% vs. 67%; p=.014) between ChatGPT-3.5 and ChatGPT-4.0. Accuracy (47% vs. 67%; p=.269), over-conclusiveness (47% vs. 33%; p=.456), and incompleteness (40% vs. 40%; p=1.000) did not differ significantly between ChatGPT-3.5 and ChatGPT-4.0. Both ChatGPT-3.5 and ChatGPT-4.0 yielded 100% accuracy in the definition and the history and physical examination categories. Diagnostic testing yielded 0% accuracy for ChatGPT-3.5 and 100% accuracy for ChatGPT-4.0. Nonsurgical interventions had 50% accuracy for ChatGPT-3.5 and 63% accuracy for ChatGPT-4.0. Surgical interventions resulted in 0% accuracy for ChatGPT-3.5 and 33% accuracy for ChatGPT-4.0.
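The reported chi-square result for supplementary information (15/15 responses for ChatGPT-3.5 vs. 10/15 for ChatGPT-4.0) can be reproduced from the counts above. The sketch below is not the authors' analysis code; it assumes a standard 2x2 contingency layout (supplementary vs. not supplementary, by model) and a Pearson chi-square without continuity correction:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no Yates correction) for the 2x2 table [[a, b], [c, d]].

    Returns the chi-square statistic and its p-value for 1 degree of
    freedom, computed via the complementary error function.
    """
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi2, df=1
    return chi2, p

# Rows: ChatGPT-3.5 (15 supplementary, 0 not), ChatGPT-4.0 (10 supplementary, 5 not)
chi2, p = chi_square_2x2(15, 0, 10, 5)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # chi2 = 6.00, p = 0.014
```

The uncorrected statistic works out to 6.00 with p ≈ .014, matching the reported value; a Yates-corrected test on the same table would give a larger p.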
ChatGPT-4.0 provided less supplementary information and achieved higher overall accuracy across question categories than ChatGPT-3.5. ChatGPT showed reasonable concordance with NASS guidelines, but clinicians should exercise caution when using ChatGPT in its current state, as it fails to safeguard against misinformation.