College of Pharmacy and Nutrition, University of Saskatchewan, Saskatoon, Canada.
Division of Nephrology, Department of Pediatrics, University of British Columbia, Vancouver, Canada.
Patient Educ Couns. 2024 Dec;129:108400. doi: 10.1016/j.pec.2024.108400. Epub 2024 Aug 12.
Chat Generative Pre-trained Transformer (ChatGPT) is a large language model with the potential to transform health care. The purpose of this study was to test whether ChatGPT could be used to create educational brochures about kidney transplant tailored to three target audiences: caregivers, teens, and children.
Using a list of 25 educational topics, standardized prompts were employed to ensure consistency in the content generated by ChatGPT. An expert panel assessed the accuracy of the content by rating agreement on a Likert scale (1 = <25 % agreement; 5 = 100 % agreement). The understandability, actionability, and readability of the brochures were assessed using the Patient Education Materials Assessment Tool for printable materials (PEMAT-P) and standard readability scales. A caregiver and a patient reviewed the brochures and provided written feedback.
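Standard readability scales such as those used here typically estimate a school grade level from sentence length and syllable counts. As a minimal sketch (not the study's actual tooling), the widely published Flesch-Kincaid grade-level formula can be computed as follows; the naive vowel-group syllable counter is an approximation for illustration only.

```python
import re


def flesch_kincaid_grade(text: str) -> float:
    """Estimate the Flesch-Kincaid grade level of a passage.

    Standard formula:
        0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

    Syllables are approximated by counting vowel groups, which is
    rough but sufficient to illustrate how the score behaves.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)

    def count_syllables(word: str) -> int:
        # Each run of vowels (incl. 'y') counts as one syllable, min 1.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words)
            - 15.59)


# Short, simple sentences score at a low grade level; long sentences
# with polysyllabic medical vocabulary score much higher.
simple = "The new kidney helps you. It cleans your blood."
complex_ = ("Immunosuppressive medications are administered indefinitely "
            "to mitigate the probability of allograft rejection.")
print(flesch_kincaid_grade(simple))
print(flesch_kincaid_grade(complex_))
```

Dedicated packages (for example, the `textstat` Python library) implement this and related scales, such as SMOG and Gunning fog, with more careful syllable handling.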
We found mean understandability scores of 69 %, 66 %, and 73 % for the caregiver, teen, and child brochures, respectively, with 90.7 % of the ChatGPT-generated brochures scoring 40 % on the actionability scale. The generated caregiver and teen materials achieved readability levels of grades 9-14, while the child-specific brochures achieved readability levels of grades 6-11. Brochures were formatted appropriately but lacked depth.
ChatGPT demonstrates potential for rapidly generating patient education materials; however, challenges remain in ensuring content specificity. We share the lessons learned to assist other healthcare providers in using this technology.