Goorman Elissa, Mittal Sukul, Choi Jennifer N
Department of Dermatology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.
Jacobs School of Medicine and Biomedical Sciences, Buffalo, NY, USA.
J Cancer Educ. 2025 Jul 1. doi: 10.1007/s13187-025-02683-2.
Effective communication is essential for promoting appropriate skin cancer screening among the public. This study compares the readability of online resources and ChatGPT-generated responses on the topic of skin cancer screening. We analyzed 60 websites and ChatGPT-4.0's responses to five questions using five readability metrics: the Flesch-Kincaid Reading Ease, Flesch-Kincaid Grade Level, SMOG Index, Gunning Fog Index, and Coleman-Liau Index. Results showed that both websites and ChatGPT responses exceeded the recommended sixth-grade reading level for health-related information. No significant difference in readability was found between university-hosted and non-university-hosted websites. However, across all readability metrics, ChatGPT responses were significantly more difficult to read. These findings highlight the need to enhance the accessibility of health information by aligning content with recommended literacy levels. Future efforts should focus on developing patient-centered, publicly accessible materials and refining AI-generated content to improve public understanding and encourage proactive engagement in skin cancer screening.
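The Flesch-Kincaid Grade Level cited above is a simple formula over word, sentence, and syllable counts: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A minimal sketch of how such a score is computed follows; the syllable counter is a naive vowel-group heuristic of my own (real tools such as the study's readability software use dictionary-backed counts), so exact values will differ from published calculators.

```python
import re


def count_syllables(word: str) -> int:
    """Naive heuristic: count vowel groups, dropping a silent trailing 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)


def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)


# Short, common words score near or below grade school level;
# long clinical vocabulary pushes the grade well past sixth grade.
simple = flesch_kincaid_grade("The cat sat on the mat.")
complex_ = flesch_kincaid_grade(
    "Dermatological screening recommendations frequently "
    "exceed accessible readability thresholds."
)
```

A score of 6.0 or below corresponds to the sixth-grade target the abstract references, which is why dense medical prose (and, per this study, ChatGPT output) typically fails it.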