

Decoding medical jargon: The use of AI language models (ChatGPT-4, BARD, microsoft copilot) in radiology reports.

Affiliations

Department of Radiology, King's College Hospital London, Dubai, United Arab Emirates.

Department of Radiology, Eskişehir Osmangazi University, Eskişehir, Turkiye; Department of Medical Education, Gazi University, Ankara, Turkiye.

Publication Information

Patient Educ Couns. 2024 Sep;126:108307. doi: 10.1016/j.pec.2024.108307. Epub 2024 May 3.


DOI: 10.1016/j.pec.2024.108307
PMID: 38743965
Abstract

OBJECTIVE: Evaluate Artificial Intelligence (AI) language models (ChatGPT-4, BARD, Microsoft Copilot) in simplifying radiology reports, assessing readability, understandability, actionability, and urgency classification.

METHODS: This study evaluated the effectiveness of these AI models in translating radiology reports into patient-friendly language and providing understandable and actionable suggestions and urgency classifications. Thirty radiology reports were processed using AI tools, and their outputs were assessed for readability (Flesch Reading Ease, Flesch-Kincaid Grade Level), understandability (PEMAT), and the accuracy of urgency classification. ANOVA and Chi-Square tests were performed to compare the models' performances.

RESULTS: All three AI models successfully transformed medical jargon into more accessible language, with BARD showing superior readability scores. In terms of understandability, all models achieved scores above 70%, with ChatGPT-4 and BARD leading (p < 0.001, both). However, the AI models varied in accuracy of urgency recommendations, with no significant statistical difference (p = 0.284).

CONCLUSION: AI language models have proven effective in simplifying radiology reports, thereby potentially improving patient comprehension and engagement in their health decisions. However, their accuracy in assessing the urgency of medical conditions based on radiology reports suggests a need for further refinement.

PRACTICE IMPLICATIONS: Incorporating AI in radiology communication can empower patients, but further development is crucial for comprehensive and actionable patient support.
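The readability metrics named in the methods, Flesch Reading Ease and Flesch-Kincaid Grade Level, are closed-form formulas over sentence, word, and syllable counts. A minimal Python sketch is shown below; the vowel-group syllable heuristic is an assumption for illustration (dedicated readability tools use dictionary-based syllable counts), so scores will only approximate those from validated instruments:

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count vowel groups, drop one trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def _counts(text: str):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return sentences, len(words), syllables

def flesch_reading_ease(text: str) -> float:
    # FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    s, w, sy = _counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (sy / w)

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    s, w, sy = _counts(text)
    return 0.39 * (w / s) + 11.8 * (sy / w) - 15.59

# Example: a simplified sentence scores as easier than jargon-heavy report text.
simple = "The scan shows a small spot on your lung."
jargon = "Radiological examination demonstrated bilateral pulmonary consolidation."
```

Higher Flesch Reading Ease means easier text, while a lower Flesch-Kincaid Grade Level means fewer years of schooling are needed, which is the direction of improvement the study measures when AI models rewrite reports in plain language.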


Similar Articles

[1]
Decoding medical jargon: The use of AI language models (ChatGPT-4, BARD, microsoft copilot) in radiology reports.

Patient Educ Couns. 2024-9

[2]
Chatbots talk Strabismus: Can AI become the new patient Educator?

Int J Med Inform. 2024-11

[3]
Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care.

Medicine (Baltimore). 2024-8-16

[4]
From jargon to clarity: Improving the readability of foot and ankle radiology reports with an artificial intelligence large language model.

Foot Ankle Surg. 2024-6

[5]
Dr. Google to Dr. ChatGPT: assessing the content and quality of artificial intelligence-generated medical information on appendicitis.

Surg Endosc. 2024-5

[6]
Assessing the Responses of Large Language Models (ChatGPT-4, Gemini, and Microsoft Copilot) to Frequently Asked Questions in Breast Imaging: A Study on Readability and Accuracy.

Cureus. 2024-5-9

[7]
Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment.

J Med Internet Res. 2024-8-14

[8]
Assessing the Readability of Patient Education Materials on Cardiac Catheterization From Artificial Intelligence Chatbots: An Observational Cross-Sectional Study.

Cureus. 2024-7-4

[9]
Investigating the impact of innovative AI chatbot on post-pandemic medical education and clinical assistance: a comprehensive analysis.

ANZ J Surg. 2024-2

[10]
From technical to understandable: Artificial Intelligence Large Language Models improve the readability of knee radiology reports.

Knee Surg Sports Traumatol Arthrosc. 2024-5

Cited By

[1]
Development, optimization, and preliminary evaluation of a novel artificial intelligence tool to promote patient health literacy in radiology reports: The Rads-Lit tool.

PLoS One. 2025-9-3

[2]
Comparison of the readability of ChatGPT and Bard in medical communication: a meta-analysis.

BMC Med Inform Decis Mak. 2025-9-1

[3]
Evaluating the Quality and Understandability of Radiology Report Summaries Generated by ChatGPT: Survey Study.

JMIR Form Res. 2025-8-27

[4]
Chatbots in Radiology: Current Applications, Limitations and Future Directions of ChatGPT in Medical Imaging.

Diagnostics (Basel). 2025-6-26

[5]
Leveraging artificial intelligence chatbots for anemia prevention: A comparative study of ChatGPT-3.5, copilot, and Gemini outputs against Google Search results.

PEC Innov. 2025-4-1

[6]
Performance of artificial intelligence chatbots in responding to the frequently asked questions of patients regarding dental prostheses.

BMC Oral Health. 2025-4-15

[7]
Assessing the performance of large language models (GPT-3.5 and GPT-4) and accurate clinical information for pediatric nephrology.

Pediatr Nephrol. 2025-3-5

[8]
Enhancing Patient Education on Cardiovascular Rehabilitation with Large Language Models.

Mo Med. 2025

[9]
ChatGPT-4 Omni's superiority in answering multiple-choice oral radiology questions.

BMC Oral Health. 2025-2-1

[10]
Tailoring glaucoma education using large language models: Addressing health disparities in patient comprehension.

Medicine (Baltimore). 2025-1-10
