Patient education resources for oral mucositis: a Google search and ChatGPT analysis.

Author Information

Hunter Nathaniel, Allen David, Xiao Daniel, Cox Madisyn, Jain Kunal

Affiliations

McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, USA.

Department of Otorhinolaryngology-Head and Neck Surgery, The University of Texas Health Science Center at Houston, Houston, TX, USA.

Publication Information

Eur Arch Otorhinolaryngol. 2025 Mar;282(3):1609-1618. doi: 10.1007/s00405-024-08913-5. Epub 2024 Aug 28.

Abstract

PURPOSE

Oral mucositis affects 90% of patients receiving chemotherapy or radiation for head and neck malignancies. Many patients use the internet to learn about their condition and treatments; however, the quality of online resources is not guaranteed. Our objective was to determine the most common Google searches related to "oral mucositis" and assess the quality and readability of available resources compared to ChatGPT-generated responses.

METHODS

Data related to Google searches for "oral mucositis" were analyzed. People Also Ask (PAA) questions (generated by Google) related to searches for "oral mucositis" were documented. Google resources were rated on quality, understandability, ease of reading, and reading grade level using the Journal of the American Medical Association (JAMA) benchmark criteria, the Patient Education Materials Assessment Tool (PEMAT), the Flesch Reading Ease Score, and the Flesch-Kincaid Grade Level, respectively. ChatGPT-generated responses to the most popular PAA questions were rated using identical metrics.
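
As an illustration of the two automated metrics named above, the minimal sketch below computes the Flesch Reading Ease Score and Flesch-Kincaid Grade Level in Python using the open-source textstat package; the abstract does not state which tool the authors used, so the package choice and the sample passage are assumptions for illustration only. The JAMA benchmark criteria and PEMAT, by contrast, are rubric-based and scored manually.

    import textstat  # third-party readability package (pip install textstat)

    # Hypothetical patient-education passage; not taken from the study.
    sample = (
        "Oral mucositis is soreness and swelling inside the mouth. "
        "It is a common side effect of chemotherapy and radiation."
    )

    fres = textstat.flesch_reading_ease(sample)   # 0-100 scale; higher = easier to read
    fkgl = textstat.flesch_kincaid_grade(sample)  # approximate US school grade level

    # A roughly 6th-grade reading level (FKGL <= 6) is the threshold commonly
    # used for universally readable patient materials.
    print(f"Flesch Reading Ease: {fres:.1f}")
    print(f"Flesch-Kincaid Grade Level: {fkgl:.1f}")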

RESULTS

Google search popularity for "oral mucositis" has increased significantly since 2004. Of the Google resources, 78% answered the associated PAA question, and 6% met the criteria for universal readability. All (100%) of the ChatGPT-generated responses answered the prompt, and 20% met the criteria for universal readability when asked to write for the appropriate audience.

CONCLUSION

Most resources provided by Google do not meet the criteria for universal readability. When prompted specifically, ChatGPT-generated responses were consistently more readable than Google resources. After verification of accuracy by healthcare professionals, ChatGPT could be a reasonable alternative to generate universally readable patient education resources.
