

Can generative AI improve the readability of patient education materials at a radiology practice?

Author Affiliations

The University of Texas at Austin, Dell Medical School, Department of Diagnostic Medicine, Austin, TX, USA.

The University of Texas at Austin, Austin, TX, USA.

Publication Information

Clin Radiol. 2024 Nov;79(11):e1366-e1371. doi: 10.1016/j.crad.2024.08.019. Epub 2024 Aug 22.

DOI: 10.1016/j.crad.2024.08.019
PMID: 39266371
Abstract

AIM

This study evaluated the readability of existing patient education materials and explored the potential of generative AI tools, such as ChatGPT-4 and Google Gemini, to simplify these materials to a sixth-grade reading level, in accordance with guidelines.

MATERIALS AND METHODS

Seven patient education documents were selected from a major radiology group. ChatGPT-4 and Gemini were given the documents and asked to reformulate them to target a sixth-grade reading level. Average reading level (ARL) and proportional word count (PWC) change were calculated, and a one-sample t-test was conducted (significance threshold p = 0.05). Three radiologists assessed the materials on a Likert scale for appropriateness, relevance, clarity, and information retention.
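The abstract does not name the readability formula behind the ARL figures; the Flesch-Kincaid grade level is a common choice for this kind of study. The following is a minimal sketch (an assumption, not the authors' actual code) of how a per-document grade level and the one-sample t-test against a target grade could be computed:

```python
import math
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, discounting a trailing silent "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(1, count)

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

def one_sample_t(samples: list[float], mu: float) -> float:
    # t statistic for H0: population mean equals mu (e.g. the sixth-grade target).
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return (mean - mu) / math.sqrt(var / n)
```

In this framing, each simplified document yields one grade-level sample, and the t statistic tests whether the set of rewritten documents differs from the sixth-grade target; the paper's per-document significance counts (e.g. 6/7) suggest the test was also run per document against the original.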

RESULTS

The original materials had an ARL of 11.72. ChatGPT's ARL was 7.32 ± 0.76 (6/7 documents significant) and Gemini's ARL was 6.55 ± 0.51 (7/7 significant). ChatGPT reduced word count by 15% ± 7%, with 95% of outputs retaining at least 75% of the information; Gemini reduced word count by 33% ± 7%, with 68% retaining at least 75%. ChatGPT outputs were rated more appropriate (95% vs. 57%), clearer (92% vs. 67%), and more relevant (95% vs. 76%) than Gemini's. Interrater agreement was significantly higher for ChatGPT (0.91) than for Gemini (0.46).

CONCLUSION

Generative AI significantly enhances the readability of patient education materials, which in their original form did not meet the recommended sixth-grade ARL. Radiologist evaluations confirmed the appropriateness and relevance of the AI-simplified texts. This study highlights the capabilities of generative AI tools and the need for ongoing expert review to ensure content accuracy and suitability.


Similar Articles

1. Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care.
   Medicine (Baltimore). 2024 Aug 16;103(33):e39305. doi: 10.1097/MD.0000000000039305.
2. Optimizing Readability of Patient-Facing Hand Surgery Education Materials Using Chat Generative Pretrained Transformer (ChatGPT) 3.5.
   J Hand Surg Am. 2024 Oct;49(10):986-991. doi: 10.1016/j.jhsa.2024.05.007. Epub 2024 Jul 6.
3. Dr. Google to Dr. ChatGPT: assessing the content and quality of artificial intelligence-generated medical information on appendicitis.
   Surg Endosc. 2024 May;38(5):2887-2893. doi: 10.1007/s00464-024-10739-5. Epub 2024 Mar 5.
4. Assessing the Readability of Patient Education Materials on Cardiac Catheterization From Artificial Intelligence Chatbots: An Observational Cross-Sectional Study.
   Cureus. 2024 Jul 4;16(7):e63865. doi: 10.7759/cureus.63865. eCollection 2024 Jul.
5. Enhancing readability of USFDA patient communications through large language models: a proof-of-concept study.
   Expert Rev Clin Pharmacol. 2024 Aug;17(8):731-741. doi: 10.1080/17512433.2024.2363840. Epub 2024 Jun 4.
6. Empowering patients: how accurate and readable are large language models in renal cancer education.
   Front Oncol. 2024 Sep 26;14:1457516. doi: 10.3389/fonc.2024.1457516. eCollection 2024.
7. Dr. Google vs. Dr. ChatGPT: Exploring the Use of Artificial Intelligence in Ophthalmology by Comparing the Accuracy, Safety, and Readability of Responses to Frequently Asked Patient Questions Regarding Cataracts and Cataract Surgery.
   Semin Ophthalmol. 2024 Aug;39(6):472-479. doi: 10.1080/08820538.2024.2326058. Epub 2024 Mar 22.
8. Can artificial intelligence models serve as patient information consultants in orthodontics?
   BMC Med Inform Decis Mak. 2024 Jul 29;24(1):211. doi: 10.1186/s12911-024-02619-8.
9. Evaluation of the Current Status of Artificial Intelligence for Endourology Patient Education: A Blind Comparison of ChatGPT and Google Bard Against Traditional Information Resources.
   J Endourol. 2024 Aug;38(8):843-851. doi: 10.1089/end.2023.0696. Epub 2024 May 17.

Cited By

1. Chatbots in Radiology: Current Applications, Limitations and Future Directions of ChatGPT in Medical Imaging.
   Diagnostics (Basel). 2025 Jun 26;15(13):1635. doi: 10.3390/diagnostics15131635.
2. Evaluating Quality and Readability of AI-generated Information on Living Kidney Donation.
   Transplant Direct. 2024 Dec 10;11(1):e1740. doi: 10.1097/TXD.0000000000001740. eCollection 2025 Jan.