Assessing the Capability of ChatGPT, Google Bard, and Microsoft Bing in Solving Radiology Case Vignettes.

Author Information

Sarangi Pradosh Kumar, Narayan Ravi Kant, Mohakud Sudipta, Vats Aditi, Sahani Debabrata, Mondal Himel

Affiliations

Department of Radiodiagnosis, All India Institute of Medical Sciences, Deoghar, Jharkhand, India.

Department of Anatomy, ESIC Medical College & Hospital, Bihta, Patna, Bihar, India.

Publication Information

Indian J Radiol Imaging. 2023 Dec 29;34(2):276-282. doi: 10.1055/s-0043-1777746. eCollection 2024 Apr.

Abstract

The field of radiology relies on accurate interpretation of medical images for effective diagnosis and patient care. Recent advancements in artificial intelligence (AI) and natural language processing have sparked interest in exploring the potential of AI models to assist radiologists. However, limited research has been conducted to assess the performance of AI models in radiology case interpretation, particularly in comparison to human experts. This study aimed to evaluate the performance of ChatGPT, Google Bard, and Bing in solving radiology case vignettes (Fellowship of the Royal College of Radiologists 2A [FRCR2A] examination-style questions) by comparing their responses to those provided by two radiology residents. A total of 120 multiple-choice questions based on radiology case vignettes were formulated according to the pattern of the FRCR2A examination. The questions were presented to ChatGPT, Google Bard, and Bing, and two residents took the same examination within 3 hours. The responses generated by the AI models were collected and compared to the answer keys, and the explanations supporting the answers were rated by two radiologists. A cutoff of 60% was set as the passing score. The two residents (63.33 and 57.5%) outperformed the three AI models: Bard (44.17%), Bing (53.33%), and ChatGPT (45%), but only one resident passed the examination. The response patterns among the five respondents were significantly different (p = 0.0117). In addition, the agreement among the generative AI models was significant (intraclass correlation coefficient [ICC] = 0.628), but there was no agreement between the residents (kappa = -0.376). The explanations given by the generative AI models in support of their answers were 44.72% accurate. Humans exhibited superior accuracy compared to the AI models, showcasing a stronger comprehension of the subject matter. None of the three AI models included in the study achieved the minimum percentage needed to pass the FRCR2A examination. However, the generative AI models showed significant agreement in their answers, whereas the residents showed little agreement with each other, highlighting a lack of consistency in the residents' responses.
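
The abstract reports item-level scores against a 60% pass mark, a test of differing response patterns, Cohen's kappa between the two residents, and an intraclass correlation coefficient (ICC) among the three AI models. The Python sketch below illustrates how such statistics could be computed; the per-question data, column names, and the choice of a chi-square test are assumptions for illustration only, not the study's actual data or analysis code.

```python
# A minimal, illustrative sketch (not the authors' analysis code) of how the agreement
# statistics reported in the abstract could be computed. All values below are invented
# per-question correctness scores (1 = correct, 0 = incorrect) for a handful of items.
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score
import pingouin as pg

# Hypothetical item-level results for the five respondents.
answers = pd.DataFrame({
    "resident_1": [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    "resident_2": [0, 1, 1, 0, 1, 1, 0, 1, 0, 1],
    "chatgpt":    [0, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "bard":       [0, 0, 1, 0, 0, 0, 1, 0, 1, 0],
    "bing":       [1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
})

# Percentage scores against a 60% pass mark.
scores = answers.mean() * 100
print(scores.round(2), "\npassed:", (scores >= 60).to_dict())

# One plausible test of differing response patterns: chi-square on correct/incorrect counts.
counts = pd.DataFrame({"correct": answers.sum(), "incorrect": (1 - answers).sum()})
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi-square p-value: {p:.4f}")

# Cohen's kappa: chance-corrected agreement between the two residents.
kappa = cohen_kappa_score(answers["resident_1"], answers["resident_2"])
print(f"Cohen's kappa (residents): {kappa:.3f}")

# ICC among the three AI models: long format with questions as targets and models as raters.
long = answers[["chatgpt", "bard", "bing"]].copy()
long["question"] = long.index
long = long.melt(id_vars="question", var_name="model", value_name="correct")
icc = pg.intraclass_corr(data=long, targets="question", raters="model", ratings="correct")
print(icc[["Type", "ICC"]])
```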

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe45/10972658/8bdb3f5a48a9/10-1055-s-0043-1777746-i2392963-1.jpg
