Builoff Valerie, Shanbhag Aakash, Miller Robert JH, Dey Damini, Liang Joanna X, Flood Kathleen, Bourque Jamieson M, Chareonthaitawee Panithaya, Phillips Lawrence M, Slomka Piotr J
Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Signal and Image Processing Institute, Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA.
J Nucl Cardiol. 2025 Mar;45:102089. doi: 10.1016/j.nuclcard.2024.102089. Epub 2024 Nov 29.
Previous studies have evaluated the abilities of large language models (LLMs) in medical disciplines; however, few have focused on image analysis, and none specifically on cardiovascular imaging or nuclear cardiology. This study assesses four LLMs, GPT-4, GPT-4 Turbo, GPT-4 Omni (GPT-4o) (OpenAI), and Gemini (Google Inc.), in responding to questions from the 2023 American Society of Nuclear Cardiology Board Preparation Exam, reflecting the scope of the Certification Board of Nuclear Cardiology (CBNC) examination.
We used 168 questions: 141 text-only and 27 image-based, categorized into four sections mirroring the CBNC exam. Each LLM received the same standardized prompt, and each section was administered 30 times to account for stochasticity. Performance was assessed over six weeks for all models except GPT-4o. McNemar's test was used to compare proportions of correct responses.
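McNemar's test compares two models on the same question set by looking only at discordant pairs: questions one model answered correctly and the other did not. A minimal stdlib sketch of the exact two-sided form is below; the discordant counts are hypothetical, not taken from the study.

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from discordant pair counts.

    b: questions model A answered correctly and model B did not
    c: the reverse
    Under the null, discordant outcomes split 50/50, so the test is
    a two-sided exact binomial test with p = 0.5 on n = b + c trials.
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # P(X <= k) for X ~ Binomial(n, 0.5), doubled for two-sidedness
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical example: of 168 questions, 25 discordant one way, 10 the other
print(round(mcnemar_exact(25, 10), 4))  # → 0.0167
```

Because the same 168 questions are posed to every model, a paired test such as McNemar's is more appropriate than comparing two independent proportions.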
GPT-4, Gemini, GPT-4 Turbo, and GPT-4o correctly answered median percentages of 56.8% (95% confidence interval 55.4%-58.0%), 40.5% (39.9%-42.9%), 60.7% (59.5%-61.3%), and 63.1% (62.5%-64.3%) of questions, respectively. GPT-4o significantly outperformed the other models (P = .007 vs GPT-4 Turbo; P < .001 vs GPT-4 and Gemini). GPT-4o also outperformed GPT-4, Gemini, and GPT-4 Turbo on text-only questions (P < .001, P < .001, and P = .001, respectively), while Gemini performed worse than the other models on image-based questions (P < .001 for all).
GPT-4o demonstrated superior performance among the four LLMs, achieving scores likely within or just outside the range required to pass a test akin to the CBNC examination. Although improvements in medical image interpretation are needed, GPT-4o shows potential to support physicians in answering text-based clinical questions.