
Evaluating AI proficiency in nuclear cardiology: Large language models take on the board preparation exam.

Authors

Builoff Valerie, Shanbhag Aakash, Miller Robert JH, Dey Damini, Liang Joanna X, Flood Kathleen, Bourque Jamieson M, Chareonthaitawee Panithaya, Phillips Lawrence M, Slomka Piotr J

Affiliations

Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA.

Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Signal and Image Processing Institute, Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA.

Publication Information

J Nucl Cardiol. 2025 Mar;45:102089. doi: 10.1016/j.nuclcard.2024.102089. Epub 2024 Nov 29.

Abstract

BACKGROUND

Previous studies have evaluated the ability of large language models (LLMs) in medical disciplines; however, few have focused on image analysis, and none specifically on cardiovascular imaging or nuclear cardiology. This study assesses four LLMs (GPT-4, GPT-4 Turbo, and GPT-4 omni [GPT-4o] from OpenAI, and Gemini from Google Inc.) in responding to questions from the 2023 American Society of Nuclear Cardiology Board Preparation Exam, reflecting the scope of the Certification Board of Nuclear Cardiology (CBNC) examination.

METHODS

We used 168 questions: 141 text-only and 27 image-based, categorized into four sections mirroring the CBNC exam. Each LLM received the same standardized prompt and was applied to each section 30 times to account for stochasticity. Performance was assessed over six weeks for all models except GPT-4o. McNemar's test was used to compare the proportions of correct responses.
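The pairwise model comparison described above pairs each question's correct/incorrect outcome for two models and applies McNemar's test, which only considers the discordant pairs. A minimal sketch of the exact (binomial) form of the test, with hypothetical counts rather than the study's data:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from the discordant pairs:
    b = questions model A answered correctly and model B did not,
    c = questions model B answered correctly and model A did not.
    Concordant pairs (both right or both wrong) do not enter the test."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    k = min(b, c)
    # two-sided exact binomial test of b ~ Binomial(n, 0.5)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical example: model A alone correct on 25 questions,
# model B alone correct on 9; the remaining questions are concordant.
print(mcnemar_exact(25, 9))
```

The published analysis may use a different variant (e.g., the chi-squared approximation); this exact form is one standard choice when discordant counts are small.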

RESULTS

GPT-4, Gemini, GPT-4 Turbo, and GPT-4o correctly answered median percentages of 56.8% (95% confidence interval 55.4%-58.0%), 40.5% (39.9%-42.9%), 60.7% (59.5%-61.3%), and 63.1% (62.5%-64.3%) of questions, respectively. GPT-4o significantly outperformed the other models (P = .007 vs GPT-4 Turbo; P < .001 vs GPT-4 and Gemini). GPT-4o excelled on text-only questions compared to GPT-4, Gemini, and GPT-4 Turbo (P < .001, P < .001, and P = .001), while Gemini performed worse on image-based questions (P < .001 for all).
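Summarizing the 30 repeated runs per section as a median with an interval, as reported above, can be sketched as follows. The percentile interval here is an assumption for illustration; the abstract does not state which confidence-interval method the study used, and the run accuracies below are hypothetical:

```python
import statistics

def summarize_runs(accuracies, alpha=0.05):
    """Median accuracy over repeated runs plus a simple percentile
    interval. Illustrative only: the study's exact CI method is not
    specified in the abstract."""
    xs = sorted(accuracies)
    n = len(xs)
    lo = xs[max(0, int(n * (alpha / 2)))]
    hi = xs[min(n - 1, int(n * (1 - alpha / 2)))]
    return statistics.median(xs), lo, hi

# Hypothetical accuracies from 30 repeated runs of one model:
runs = [0.55 + 0.01 * (i % 5) for i in range(30)]
print(summarize_runs(runs))
```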

CONCLUSION

GPT-4o demonstrated superior performance among the four LLMs, achieving scores likely within or just outside the range required to pass a test akin to the CBNC examination. Although improvements in medical image interpretation are needed, GPT-4o shows potential to support physicians in answering text-based clinical questions.


