
Assessing the Capability of ChatGPT, Google Bard, and Microsoft Bing in Solving Radiology Case Vignettes.

Author Information

Sarangi Pradosh Kumar, Narayan Ravi Kant, Mohakud Sudipta, Vats Aditi, Sahani Debabrata, Mondal Himel

Affiliations

Department of Radiodiagnosis, All India Institute of Medical Sciences, Deoghar, Jharkhand, India.

Department of Anatomy, ESIC Medical College & Hospital, Bihta, Patna, Bihar, India.

Publication Information

Indian J Radiol Imaging. 2023 Dec 29;34(2):276-282. doi: 10.1055/s-0043-1777746. eCollection 2024 Apr.

DOI: 10.1055/s-0043-1777746
PMID: 38549897
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10972658/
Abstract

The field of radiology relies on accurate interpretation of medical images for effective diagnosis and patient care. Recent advancements in artificial intelligence (AI) and natural language processing have sparked interest in exploring the potential of AI models to assist radiologists. However, limited research has been conducted to assess the performance of AI models in radiology case interpretation, particularly in comparison to human experts. This study aimed to evaluate the performance of ChatGPT, Google Bard, and Bing in solving radiology case vignettes (Fellowship of the Royal College of Radiologists 2A [FRCR2A] examination-style questions) by comparing their responses to those provided by two radiology residents. A total of 120 multiple-choice questions based on radiology case vignettes were formulated according to the pattern of the FRCR2A examination. The questions were presented to ChatGPT, Google Bard, and Bing. Two residents took the examination with the same questions in 3 hours. The responses generated by the AI models were collected and compared to the answer key, and the explanations given for the answers were rated by the two radiologists. A cutoff of 60% was set as the passing score. The two residents (63.33% and 57.5%) outperformed the three AI models: Bard (44.17%), Bing (53.33%), and ChatGPT (45%), but only one resident passed the examination. The response patterns among the five respondents were significantly different (P = 0.0117). In addition, the agreement among the generative AI models was significant (intraclass correlation coefficient [ICC] = 0.628), but there was no agreement between the residents (kappa = -0.376). The explanations provided by the generative AI models in support of their answers were 44.72% accurate. Humans exhibited superior accuracy compared to the AI models, showcasing a stronger comprehension of the subject matter. None of the three AI models included in the study achieved the minimum percentage needed to pass an FRCR2A examination. However, the generative AI models showed significant agreement in their answers, whereas the residents exhibited low agreement, highlighting a lack of consistency in their responses.
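For readers who want to see the general shape of the analysis the abstract describes, the sketch below computes per-respondent percentage scores against the 60% pass cutoff, a chi-square test of correct/incorrect patterns across respondents, and Cohen's kappa between the two residents. This is an illustrative sketch only, not the authors' code: the answer key, the five-option format, and all answer vectors are random placeholders, and the ICC analysis among the AI models is not reproduced here.

```python
# Illustrative sketch (not the authors' published code) of the analysis outlined in
# the abstract: scores against the 60% pass cutoff, a chi-square test of
# correct/incorrect patterns across respondents, and Cohen's kappa between the two
# residents. All answer data below are random placeholders, not the study's data.

import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_questions = 120                      # 120 FRCR2A-style multiple-choice questions
options = np.array(list("ABCDE"))      # assumed five options per question
pass_cutoff = 0.60                     # 60% was set as the passing score

answer_key = rng.choice(options, n_questions)
respondents = ["ChatGPT", "Google Bard", "Bing", "Resident 1", "Resident 2"]
answers = {name: rng.choice(options, n_questions) for name in respondents}

# Percentage score and pass/fail for each respondent.
correct = {name: (ans == answer_key).astype(int) for name, ans in answers.items()}
for name, hits in correct.items():
    score = hits.mean()
    print(f"{name}: {score:.2%} ({'pass' if score >= pass_cutoff else 'fail'})")

# Chi-square test of whether correct/incorrect counts differ across the five
# respondents (the abstract reports P = 0.0117 for this comparison).
table = np.array([[h.sum(), n_questions - h.sum()] for h in correct.values()])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, P = {p:.4f}")

# Cohen's kappa on the options chosen by the two residents (the abstract reports
# kappa = -0.376, i.e., essentially no agreement).
kappa = cohen_kappa_score(answers["Resident 1"], answers["Resident 2"])
print(f"Resident agreement (Cohen's kappa) = {kappa:.3f}")
```

With the study's actual answer key and response vectors in place of the random placeholders, these calls would yield the percentage scores, the P value, and the kappa reported in the abstract.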


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe45/10972658/8029ed67892b/10-1055-s-0043-1777746-i2392963-4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe45/10972658/8bdb3f5a48a9/10-1055-s-0043-1777746-i2392963-1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe45/10972658/6415125134cf/10-1055-s-0043-1777746-i2392963-2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe45/10972658/359da9b3a469/10-1055-s-0043-1777746-i2392963-3.jpg

Similar Articles

1. Assessing the Capability of ChatGPT, Google Bard, and Microsoft Bing in Solving Radiology Case Vignettes.
Indian J Radiol Imaging. 2023 Dec 29;34(2):276-282. doi: 10.1055/s-0043-1777746. eCollection 2024 Apr.
2. Performance of Large Language Models (ChatGPT, Bing Search, and Google Bard) in Solving Case Vignettes in Physiology.
Cureus. 2023 Aug 4;15(8):e42972. doi: 10.7759/cureus.42972. eCollection 2023 Aug.
3. Large Language Models in Hematology Case Solving: A Comparative Study of ChatGPT-3.5, Google Bard, and Microsoft Bing.
Cureus. 2023 Aug 21;15(8):e43861. doi: 10.7759/cureus.43861. eCollection 2023 Aug.
4. Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology.
Cureus. 2023 Jun 26;15(6):e40977. doi: 10.7759/cureus.40977. eCollection 2023 Jun.
5. How AI Responds to Common Lung Cancer Questions: ChatGPT vs Google Bard.
Radiology. 2023 Jun;307(5):e230922. doi: 10.1148/radiol.230922.
6. Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study.
J Med Internet Res. 2023 Dec 28;25:e51580. doi: 10.2196/51580.
7. Evidence-based potential of generative artificial intelligence large language models in orthodontics: a comparative study of ChatGPT, Google Bard, and Microsoft Bing.
Eur J Orthod. 2024 Apr 13. doi: 10.1093/ejo/cjae017.
8. Assessing the Accuracy of Information on Medication Abortion: A Comparative Analysis of ChatGPT and Google Bard AI.
Cureus. 2024 Jan 2;16(1):e51544. doi: 10.7759/cureus.51544. eCollection 2024 Jan.
9. Pilot Testing of a Tool to Standardize the Assessment of the Quality of Health Information Generated by Artificial Intelligence-Based Models.
Cureus. 2023 Nov 24;15(11):e49373. doi: 10.7759/cureus.49373. eCollection 2023 Nov.
10. Performance of Artificial Intelligence Chatbots on Glaucoma Questions Adapted From Patient Brochures.
Cureus. 2024 Mar 23;16(3):e56766. doi: 10.7759/cureus.56766. eCollection 2024 Mar.

Cited By

1. Evaluating large language models as graders of medical short answer questions: a comparative analysis with expert human graders.
Med Educ Online. 2025 Dec;30(1):2550751. doi: 10.1080/10872981.2025.2550751. Epub 2025 Aug 24.
2. Could a New Method of Acromiohumeral Distance Measurement Emerge? Artificial Intelligence vs. Physician.
J Imaging Inform Med. 2025 Jul 25. doi: 10.1007/s10278-025-01614-3.
3. Accuracy of Large Language Models When Answering Clinical Research Questions: Systematic Review and Network Meta-Analysis.
J Med Internet Res. 2025 Apr 30;27:e64486. doi: 10.2196/64486.
4. Evaluating ChatGPT-4's Performance in Identifying Radiological Anatomy in FRCR Part 1 Examination Questions.
Indian J Radiol Imaging. 2024 Nov 4;35(2):287-294. doi: 10.1055/s-0044-1792040. eCollection 2025 Apr.
5. Comparing Diagnostic Accuracy of Clinical Professionals and Large Language Models: Systematic Review and Meta-Analysis.
JMIR Med Inform. 2025 Apr 25;13:e64963. doi: 10.2196/64963.
6. Artificial intelligence in radiology: navigating innovation and ensuring clinical reliability.
Eur Radiol. 2025 Apr 23. doi: 10.1007/s00330-025-11599-w.
7. Generative pre-trained transformer 4o (GPT-4o) in solving text-based multiple response questions for European Diploma in Radiology (EDiR): a comparative study with radiologists.
Insights Imaging. 2025 Mar 22;16(1):66. doi: 10.1186/s13244-025-01941-7.
8. Exploring Radiology Postgraduate Students' Engagement with Large Language Models for Educational Purposes: A Study of Knowledge, Attitudes, and Practices.
Indian J Radiol Imaging. 2024 Jul 19;35(1):35-42. doi: 10.1055/s-0044-1788605. eCollection 2025 Jan.
9. Revolution or risk?-Assessing the potential and challenges of GPT-4V in radiologic image interpretation.
Eur Radiol. 2025 Mar;35(3):1111-1121. doi: 10.1007/s00330-024-11115-6. Epub 2024 Oct 18.
10. Comment on: ChatGPT: Chasing the Storm in Radiology Training and Education.
Indian J Radiol Imaging. 2024 May 3;34(4):792-794. doi: 10.1055/s-0044-1786722. eCollection 2024 Oct.

References Cited by This Article

1. Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology.
Cureus. 2023 Jun 26;15(6):e40977. doi: 10.7759/cureus.40977. eCollection 2023 Jun.
2. How AI Responds to Common Lung Cancer Questions: ChatGPT vs Google Bard.
Radiology. 2023 Jun;307(5):e230922. doi: 10.1148/radiol.230922.
3. GPT-4 in Radiology: Improvements in Advanced Reasoning.
Radiology. 2023 Jun;307(5):e230987. doi: 10.1148/radiol.230987. Epub 2023 May 16.
4. Performance of ChatGPT on a Radiology Board-style Examination: Insights into Current Strengths and Limitations.
Radiology. 2023 Jun;307(5):e230582. doi: 10.1148/radiol.230582. Epub 2023 May 16.
5. How will artificial intelligence transform cardiovascular computed tomography? A conversation with an AI model.
J Cardiovasc Comput Tomogr. 2023 Jul-Aug;17(4):281-283. doi: 10.1016/j.jcct.2023.03.010. Epub 2023 Apr 7.
6. ChatGPT: Is this version good for healthcare and research?
Diabetes Metab Syndr. 2023 Apr;17(4):102744. doi: 10.1016/j.dsx.2023.102744. Epub 2023 Mar 15.
7. Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology.
Cureus. 2023 Feb 20;15(2):e35237. doi: 10.7759/cureus.35237. eCollection 2023 Feb.
8. Calibrating a Transformer-Based Model's Confidence on Community-Engaged Research Studies: Decision Support Evaluation Study.
JMIR Form Res. 2023 Mar 20;7:e41516. doi: 10.2196/41516.
9. Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector.
Biomed Mater Devices. 2023 Feb 8:1-8. doi: 10.1007/s44174-023-00063-2.
10. AI support for accurate and fast radiological diagnosis of COVID-19: an international multicenter, multivendor CT study.
Eur Radiol. 2023 Jun;33(6):4280-4291. doi: 10.1007/s00330-022-09335-9. Epub 2022 Dec 16.