


Assessing Readability of Skin Cancer Screening Resources: A Comparison of Online Websites and ChatGPT Responses.

Author Information

Goorman Elissa, Mittal Sukul, Choi Jennifer N

Affiliations

Department of Dermatology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.

Jacobs School of Medicine and Biomedical Sciences, Buffalo, NY, USA.

Publication Information

J Cancer Educ. 2025 Jul 1. doi: 10.1007/s13187-025-02683-2.

DOI: 10.1007/s13187-025-02683-2
PMID: 40591107
Abstract

Effective communication is essential for promoting appropriate skin cancer screening for the public. This study compares the readability of online resources and ChatGPT-generated responses related to the topic of skin cancer screening. We analyzed 60 websites and responses to five questions from ChatGPT-4.0 using five readability metrics: the Flesch-Kincaid Reading Ease, Flesch-Kincaid Grade Level, SMOG Index, Gunning Fog Index, and Coleman-Liau Index. Results showed that both websites and ChatGPT responses exceeded the recommended sixth grade reading level for health-related information. No significant differences were found between the readability for university-hosted versus non-university-hosted websites. However, across all readability metrics, ChatGPT responses were significantly more difficult to read. These findings highlight the need to enhance the accessibility of health information by aligning content with recommended literacy levels. Future efforts should focus on developing patient-centered, publicly accessible materials and refining AI-generated content to improve public understanding and encourage proactive engagement in skin cancer screenings.
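The readability formulas the study applies are standard text statistics built from words per sentence and syllables per word. As an illustration only (not the authors' analysis pipeline), here is a minimal sketch of the two Flesch-Kincaid metrics; the vowel-group syllable counter is a naive assumption, whereas published tools use dictionary syllabification:

```python
import re

def count_syllables(word):
    # Naive heuristic: each run of vowels counts as one syllable,
    # with a floor of one syllable per word.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_metrics(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Shorter sentences and shorter words raise the Reading Ease score and lower the Grade Level, which is why a "sixth grade" target corresponds to high ease scores; dense AI-generated prose with long sentences scores worse on both.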


Similar Articles

1. Assessing Readability of Skin Cancer Screening Resources: A Comparison of Online Websites and ChatGPT Responses.
   J Cancer Educ. 2025 Jul 1. doi: 10.1007/s13187-025-02683-2.

2. Online and ChatGPT-generated patient education materials regarding brain tumor prognosis fail to meet readability standards.
   J Clin Neurosci. 2025 Aug;138:111410. doi: 10.1016/j.jocn.2025.111410. Epub 2025 Jun 20.

3. Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study.
   J Med Internet Res. 2025 Jun 4;27:e69955. doi: 10.2196/69955.

4. Artificial Intelligence in Peripheral Artery Disease Education: A Battle Between ChatGPT and Google Gemini.
   Cureus. 2025 Jun 1;17(6):e85174. doi: 10.7759/cureus.85174. eCollection 2025 Jun.

5. American Academy of Orthopaedic Surgeons OrthoInfo provides more readable information regarding rotator cuff injury than ChatGPT.
   J ISAKOS. 2025 Feb 12;12:100841. doi: 10.1016/j.jisako.2025.100841.

6. Bridging Health Literacy Gaps in Spine Care: Using ChatGPT-4o to Improve Patient-Education Materials.
   J Bone Joint Surg Am. 2025 Jun 19. doi: 10.2106/JBJS.24.01484.

7. Artificial Intelligence Shows Limited Success in Improving Readability Levels of Spanish-language Orthopaedic Patient Education Materials.
   Clin Orthop Relat Res. 2025 Feb 11. doi: 10.1097/CORR.0000000000003413.

8. Eyes on the Text: Assessing Readability of Artificial Intelligence and Ophthalmologist Responses to Patient Surgery Queries.
   Ophthalmologica. 2025;248(3):149-159. doi: 10.1159/000544917. Epub 2025 Mar 10.

9. Currently Available Large Language Models Are Moderately Effective in Improving Readability of English and Spanish Patient Education Materials in Pediatric Orthopaedics.
   J Am Acad Orthop Surg. 2025 Jun 24. doi: 10.5435/JAAOS-D-25-00267.

10. Readability and Quality of Online Information on Osteochondral Knee Injuries: An Objective Assessment.
    Cureus. 2025 May 29;17(5):e85014. doi: 10.7759/cureus.85014. eCollection 2025 May.