
Comparing the Quality and Readability of ChatGPT-4-Generated vs. Human-Generated Patient Education Materials for Total Knee Arthroplasty.

Author Information

Lower Kirk, Lin Jia Y, Jenkin Deanne, Campbell Chantal L, Seth Ishith, Morris Matthew, Adie Sam

Affiliations

Medicine and Surgery, Griffith University, Gold Coast, AUS.

Orthopedic Surgery, Gold Coast Health, Gold Coast, AUS.

Publication Information

Cureus. 2025 Jun 21;17(6):e86491. doi: 10.7759/cureus.86491. eCollection 2025 Jun.

Abstract

Background: The purpose of this study was to evaluate the potential role of artificial intelligence, specifically ChatGPT-4, in generating patient education materials (PEMs) for total knee arthroplasty (TKA). We compared the quality and readability of TKA PEMs generated by ChatGPT-4 with those created by human experts to assess the potential for using AI in patient education.

Materials and methods: We assessed the quality and readability of TKA PEMs produced by ChatGPT-4 and by five reputable human-generated websites. Readability was compared using the Flesch-Kincaid Grade Level and Flesch Reading Ease scores. The quality of information was compared using the DISCERN criteria.

Results: ChatGPT-4 PEMs demonstrated a significantly higher reading grade level and a lower reading ease score than human-generated PEMs (p < 0.001).

Conclusions: The utility of ChatGPT-4 for producing TKA PEMs is promising; notably, its quality and reliability are comparable to human-generated resources. However, it is currently limited by readability issues, leading to a recommendation against its use for now. Future AI enhancements should prioritize readability to ensure information is more accessible, and effective collaboration between AI developers and healthcare professionals is vital for improving patient education outcomes.
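The abstract does not describe the exact scoring tools the authors used for the two readability metrics. As a rough illustration only, the sketch below computes the standard published Flesch Reading Ease and Flesch-Kincaid Grade Level formulas in Python; the naive vowel-group syllable counter and the sample text are assumptions for demonstration, not the study's pipeline.

```
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels, dropping a trailing silent 'e'.
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / max(len(sentences), 1)   # average words per sentence
    spw = syllables / max(len(words), 1)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

if __name__ == "__main__":
    sample = ("Total knee arthroplasty replaces a damaged knee joint with an implant. "
              "Most patients walk with support on the day of surgery.")
    ease, grade = flesch_scores(sample)
    print(f"Flesch Reading Ease: {ease:.1f}, Flesch-Kincaid Grade Level: {grade:.1f}")
```

Higher Reading Ease scores and lower Grade Level scores indicate easier text; patient education guidance commonly targets roughly a sixth-to-eighth-grade reading level, which is the benchmark against which the ChatGPT-4 materials fell short in this study.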


Similar Articles

Can Artificial Intelligence Improve the Readability of Patient Education Materials?
Clin Orthop Relat Res. 2023 Nov 1;481(11):2260-2267. doi: 10.1097/CORR.0000000000002668. Epub 2023 Apr 28.
