Hou Benjamin, Mukherjee Pritam, Batheja Vivek, Wang Kenneth C, Summers Ronald M, Lu Zhiyong
Division of Intramural Research, National Library of Medicine, National Institutes of Health, Bethesda, Md.
Imaging Biomarkers and Computer Aided Diagnosis Lab, Clinical Center, National Institutes of Health, 8600 Rockville Pike, Bldg 38A Lister Hill, Bethesda, MD 20894.
Radiology. 2025 Aug;316(2):e250617. doi: 10.1148/radiol.250617.
Background With the growing use of multimodal large language models (LLMs), numerous vision-enabled models have been developed and made available to the public.
Purpose To assess and quantify the advancements of multimodal LLMs in interpreting radiologic quiz cases by examining both image and textual content over the course of 1 year, and to compare model performance with that of radiologists.
Materials and Methods For this retrospective study, 95 questions from Case of the Day at the RSNA 2024 Annual Meeting were collected. Seventy-six questions from the 2023 meeting were included as a baseline for comparison. The test accuracies of prominent multimodal LLMs (including OpenAI's ChatGPT, Google's Gemini, and Meta's open-source Llama 3.2 models) were evaluated and compared with each other and with the accuracies of two senior radiologists. The McNemar test was used to assess statistical significance.
Results The newly released OpenAI o1 and GPT-4o models achieved accuracies on the 2024 questions of 59% (56 of 95; 95% CI: 48, 69) and 54% (51 of 95; 95% CI: 43, 64), respectively, whereas Gemini 1.5 Pro (Google) achieved 36% (34 of 95; 95% CI: 26, 46) and Llama 3.2-90B-Vision (Meta) achieved 33% (31 of 95; 95% CI: 23, 43). On the 2023 questions, OpenAI o1 and GPT-4o scored 62% (47 of 76; 95% CI: 50, 73) and 54% (41 of 76; 95% CI: 42, 65), respectively. GPT-4, the only publicly available vision-language model from OpenAI in 2023, achieved 43% (33 of 76; 95% CI: 32, 55). The accuracy of OpenAI o1 on the 2024 questions (59%) was comparable to that of the two radiologists, who scored 58% (55 of 95; 95% CI: 47, 68; P = .99) and 66% (63 of 95; 95% CI: 56, 76; P = .99).
Conclusion In 1 year, multimodal LLMs demonstrated substantial advancements, with the latest models from OpenAI outperforming those from Google and Meta. Notably, there was no evidence of a statistically significant difference between the accuracy of OpenAI o1 and the accuracies of the two expert radiologists. © RSNA, 2025 See also the editorial by Suh and Suh in this issue.
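The statistics reported above (per-model accuracies with 95% CIs, and McNemar P values for paired comparisons on the same cases) can be reproduced in outline with standard library routines. The sketch below is not the authors' code: the abstract does not state which CI method was used (the exact Clopper-Pearson interval is assumed here), and the per-case correctness vectors are hypothetical placeholders standing in for the actual model and radiologist answers.

```python
# Minimal sketch, not the authors' implementation. Assumptions: Clopper-Pearson
# exact 95% CIs; random placeholder vectors for per-case correctness.
import numpy as np
from scipy.stats import binomtest
from statsmodels.stats.contingency_tables import mcnemar


def accuracy_with_ci(n_correct: int, n_total: int) -> tuple[float, float, float]:
    """Accuracy with an exact (Clopper-Pearson) 95% confidence interval."""
    ci = binomtest(n_correct, n_total).proportion_ci(confidence_level=0.95, method="exact")
    return n_correct / n_total, ci.low, ci.high


# Example: OpenAI o1 on the 2024 questions (56 of 95 correct, per the abstract).
acc, lo, hi = accuracy_with_ci(56, 95)
print(f"accuracy {acc:.0%} (95% CI: {lo:.0%}, {hi:.0%})")

# McNemar test on paired correct/incorrect outcomes for a model and a reader
# answering the same 95 cases (the vectors here are random placeholders).
rng = np.random.default_rng(0)
model_correct = rng.integers(0, 2, size=95).astype(bool)
reader_correct = rng.integers(0, 2, size=95).astype(bool)
table = [
    [np.sum(model_correct & reader_correct), np.sum(model_correct & ~reader_correct)],
    [np.sum(~model_correct & reader_correct), np.sum(~model_correct & ~reader_correct)],
]
print(mcnemar(table, exact=True))  # prints the test statistic and P value
```

The McNemar test is the appropriate choice here because the model and the radiologists answered the same set of questions, so their correct/incorrect outcomes are paired rather than independent samples.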