


Exploring the ability of ChatGPT to create quality patient education resources about kidney transplant.

Affiliations

College of Pharmacy and Nutrition, University of Saskatchewan, Saskatoon, Canada.

Division of Nephrology, Department of Pediatrics, University of British Columbia, Vancouver, Canada.

Publication Info

Patient Educ Couns. 2024 Dec;129:108400. doi: 10.1016/j.pec.2024.108400. Epub 2024 Aug 12.

DOI: 10.1016/j.pec.2024.108400
PMID: 39232336
Abstract

BACKGROUND

Chat Generative Pre-trained Transformer (ChatGPT) is a language model that may have the potential to revolutionize health care. The study purpose was to test whether ChatGPT could be used to create educational brochures about kidney transplant tailored for three target audiences: caregivers, teens and children.

METHODS

Using a list of 25 educational topics, standardized prompts were employed to ensure consistency in the ChatGPT-generated content. An expert panel assessed content accuracy by rating agreement on a Likert scale (1 = <25 % agreement; 5 = 100 % agreement). The understandability, actionability and readability of the brochures were assessed using the Patient Education Materials Assessment Tool for printable materials (PEMAT-P) and standard readability scales. A caregiver and a patient reviewed the brochures and provided written feedback.
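The two scoring steps described above can be sketched as follows. The percentage rule for PEMAT scores follows the published PEMAT scoring convention (agreed items over applicable items); the Flesch-Kincaid grade-level formula is used as a stand-in for the unnamed "standard readability scales", which is an assumption, since the abstract does not say which scales were applied.

```python
# Sketch of the scoring described in METHODS. Assumptions: the PEMAT
# percentage rule (AHRQ scoring convention) and the Flesch-Kincaid
# grade-level formula; the abstract does not name its readability scales.

def pemat_score(ratings):
    """PEMAT understandability/actionability score: percentage of
    applicable items rated 'agree' (1 = agree, 0 = disagree,
    None = not applicable, which is excluded from the denominator)."""
    applicable = [r for r in ratings if r is not None]
    return 100.0 * sum(applicable) / len(applicable)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level computed from raw text counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

For example, a brochure rated agree on 3 of 4 applicable items scores `pemat_score([1, 1, 0, None, 1])` = 75.0, below the 70 % threshold commonly used to judge PEMAT materials adequate.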

RESULTS

We found mean understandability scores of 69 %, 66 %, and 73 % for caregiver, teen, and child brochures, respectively, with 90.7 % of the ChatGPT-generated brochures scoring 40 % on the actionability scale. Generated caregiver and teen materials achieved readability levels of grades 9-14, while child-specific brochures achieved readability levels of grades 6-11. Brochures were formatted appropriately but lacked depth.

CONCLUSION

ChatGPT demonstrates potential for rapidly generating patient education materials; however, challenges remain in ensuring content specificity. We share the lessons learned to assist other healthcare providers with using this technology.


Similar Articles

1. Exploring the ability of ChatGPT to create quality patient education resources about kidney transplant. Patient Educ Couns. 2024 Dec;129:108400. doi: 10.1016/j.pec.2024.108400. Epub 2024 Aug 12.
2. Using Large Language Models to Generate Educational Materials on Childhood Glaucoma. Am J Ophthalmol. 2024 Sep;265:28-38. doi: 10.1016/j.ajo.2024.04.004. Epub 2024 Apr 16.
3. The readability of American Academy of Pediatrics patient education brochures. J Pediatr Health Care. 2005 May-Jun;19(3):151-6. doi: 10.1016/j.pedhc.2005.01.013.
4. Optimizing Readability of Patient-Facing Hand Surgery Education Materials Using Chat Generative Pretrained Transformer (ChatGPT) 3.5. J Hand Surg Am. 2024 Oct;49(10):986-991. doi: 10.1016/j.jhsa.2024.05.007. Epub 2024 Jul 6.
5. Development of the Patient Education Materials Assessment Tool (PEMAT): a new measure of understandability and actionability for print and audiovisual patient information. Patient Educ Couns. 2014 Sep;96(3):395-403. doi: 10.1016/j.pec.2014.05.027. Epub 2014 Jun 12.
6. Comparison of Patient Education Materials Generated by Chat Generative Pre-Trained Transformer Versus Experts: An Innovative Way to Increase Readability of Patient Education Materials. Ann Plast Surg. 2023 Oct 1;91(4):409-412. doi: 10.1097/SAP.0000000000003634.
7. Consulting Dr. Google: Quality of Online Resources About Tympanostomy Tube Placement. Laryngoscope. 2018 Feb;128(2):496-501. doi: 10.1002/lary.26824. Epub 2017 Aug 26.
8. Optimizing Ophthalmology Patient Education via ChatBot-Generated Materials: Readability Analysis of AI-Generated Patient Education Materials and The American Society of Ophthalmic Plastic and Reconstructive Surgery Patient Brochures. Ophthalmic Plast Reconstr Surg. 2024;40(2):212-216. doi: 10.1097/IOP.0000000000002549. Epub 2023 Nov 16.
9. ChatGPT vs. web search for patient questions: what does ChatGPT do better? Eur Arch Otorhinolaryngol. 2024 Jun;281(6):3219-3225. doi: 10.1007/s00405-024-08524-0. Epub 2024 Feb 28.
10. Interrater reliability of the Patient Education Materials Assessment Tool (PEMAT). Patient Educ Couns. 2018 Mar;101(3):490-496. doi: 10.1016/j.pec.2017.09.003. Epub 2017 Sep 6.

Cited By

1. Leveraging ChatGPT to strengthen pediatric healthcare systems: a systematic review. Eur J Pediatr. 2025 Jul 12;184(8):478. doi: 10.1007/s00431-025-06320-4.
2. Performance of Artificial Intelligence Chatbots on Ultrasound Examinations: Cross-Sectional Comparative Analysis. JMIR Med Inform. 2025 Jan 9;13:e63924. doi: 10.2196/63924.