

Assessing readability formula differences with written health information materials: application, results, and recommendations.

Affiliation

College of Pharmacy, University of Oklahoma, 4502 East 41st Street, Tulsa, OK 74135, USA.

Publication Information

Res Social Adm Pharm. 2013 Sep-Oct;9(5):503-16. doi: 10.1016/j.sapharm.2012.05.009. Epub 2012 Jul 25.

DOI: 10.1016/j.sapharm.2012.05.009
PMID: 22835706
Abstract

BACKGROUND

Readability formulas are often used to guide the development and evaluation of literacy-sensitive written health information. However, readability formula results may vary considerably as a result of differences in software processing algorithms and how each formula is applied. These variations complicate interpretations of reading grade level estimates, particularly without a uniform guideline for applying and interpreting readability formulas.

OBJECTIVES

This research sought to (1) identify commonly used readability formulas reported in the health care literature, (2) demonstrate the use of the most commonly used readability formulas on written health information, (3) compare and contrast the differences when applying common readability formulas to identical selections of written health information, and (4) provide recommendations for choosing an appropriate readability formula for written health-related materials to optimize their use.

METHODS

A literature search was conducted to identify the most commonly used readability formulas in health care literature. Each of the identified formulas was subsequently applied to word samples from 15 unique examples of written health information about the topic of depression and its treatment. Readability estimates from common readability formulas were compared based on text sample size, selection, formatting, software type, and/or hand calculations. Recommendations for their use were provided.

RESULTS

The Flesch-Kincaid formula was most commonly used (57.42%). Readability formulas demonstrated variability up to 5 reading grade levels on the same text. The Simple Measure of Gobbledygook (SMOG) readability formula performed most consistently. Depending on the text sample size, selection, formatting, software, and/or hand calculations, the individual readability formula estimated up to 6 reading grade levels of variability.

CONCLUSIONS

The SMOG formula appears best suited for health care applications because of its consistency of results, higher level of expected comprehension, use of more recent validation criteria for determining reading grade level estimates, and simplicity of use. To improve interpretation of readability results, reporting reading grade level estimates from any formula should be accompanied with information about word sample size, location of word sampling in the text, formatting, and method of calculation.
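The two formulas discussed above are published equations: Flesch-Kincaid grade level (0.39 × words/sentence + 11.8 × syllables/word − 15.59) and McLaughlin's SMOG (1.0430 × √(polysyllables × 30/sentences) + 3.1291). As an illustration of the mechanics (not taken from the paper), the sketch below applies both using a rough vowel-group syllable heuristic; because tokenization and syllable counting are heuristic, its outputs will differ from commercial software, which is precisely the kind of implementation variance the study measures.

```python
import math
import re

def count_syllables(word):
    # Rough heuristic: one syllable per run of consecutive vowels.
    # Real readability software uses dictionaries or finer rules,
    # which is one source of the grade-level variability reported above.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def _split(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return sentences, words

def smog_grade(text):
    # SMOG (McLaughlin, 1969):
    # 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
    sentences, words = _split(text)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(poly * 30 / len(sentences)) + 3.1291

def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences, words = _split(text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Running both functions on the same passage typically yields different grade estimates, and changing the sample size or sentence segmentation shifts each estimate further, consistent with the variability the study reports.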


Similar Articles

1. Assessing readability formula differences with written health information materials: application, results, and recommendations.
Res Social Adm Pharm. 2013 Sep-Oct;9(5):503-16. doi: 10.1016/j.sapharm.2012.05.009. Epub 2012 Jul 25.
2. Assessment of online patient education materials from major ophthalmologic associations.
JAMA Ophthalmol. 2015 Apr;133(4):449-54. doi: 10.1001/jamaophthalmol.2014.6104.
3. A readability assessment of online stroke information.
J Stroke Cerebrovasc Dis. 2014 Jul;23(6):1362-7. doi: 10.1016/j.jstrokecerebrovasdis.2013.11.017. Epub 2014 Jan 3.
4. Readability assessment of online ophthalmic patient information.
JAMA Ophthalmol. 2013 Dec;131(12):1610-6. doi: 10.1001/jamaophthalmol.2013.5521.
5. Readability assessment of internet-based patient education materials related to uterine artery embolization.
J Vasc Interv Radiol. 2013 Apr;24(4):469-74. doi: 10.1016/j.jvir.2013.01.006. Epub 2013 Feb 26.
6. Health literacy and the readability of written information for hormone therapies.
J Midwifery Womens Health. 2013 May-Jun;58(3):265-70. doi: 10.1111/jmwh.12036. Epub 2013 Apr 30.
7. Readability of the American Academy of Pediatric Dentistry patient education materials.
Pediatr Dent. 2007 Sep-Oct;29(5):431-5.
8. Readability assessment of internet-based patient education materials related to mammography for breast cancer screening.
Acad Radiol. 2015 Mar;22(3):290-5. doi: 10.1016/j.acra.2014.10.009. Epub 2014 Dec 5.
9. Health literacy and the Internet: a study on the readability of Australian online health information.
Aust N Z J Public Health. 2015 Aug;39(4):309-14. doi: 10.1111/1753-6405.12341. Epub 2015 Feb 25.
10. Readability and patient education materials used for low-income populations.
Clin Nurse Spec. 2009 Jan-Feb;23(1):33-40; quiz 41-2. doi: 10.1097/01.NUR.0000343079.50214.31.

Cited By

1. Comparison of the readability of ChatGPT and Bard in medical communication: a meta-analysis.
BMC Med Inform Decis Mak. 2025 Sep 1;25(1):325. doi: 10.1186/s12911-025-03035-2.
2. Early Feedback for the Development of a Novel Brief Colon Cancer Screening Decision Aid for Adults ≥75 years at Risk for Limited Health Literacy: A Pilot Study.
Cancer Control. 2025 Jan-Dec;32:10732748251372677. doi: 10.1177/10732748251372677. Epub 2025 Aug 28.
3. Updating Health Canada's Heat-Health Messages for the Environment and Climate Change Canada Heat Warning System: A Collaboration with Canadian Experts.
Int J Environ Res Public Health. 2025 Aug 13;22(8):1266. doi: 10.3390/ijerph22081266.
4. Evaluating the Quality and Understandability of Radiology Report Summaries Generated by ChatGPT: Survey Study.
JMIR Form Res. 2025 Aug 27;9:e76097. doi: 10.2196/76097.
5. Part C Early Intervention Procedural Safeguard Notices: Are They Supporting Parents to Understand Their Rights?
Topics Early Child Spec Educ. 2025 Feb;44(4):330-341. doi: 10.1177/02711214241287174. Epub 2024 Oct 29.
6. Comparative Assessment of Large Language Model Outputs and NHS Patient Information in Oral Medicine.
Cureus. 2025 Aug 16;17(8):e90242. doi: 10.7759/cureus.90242. eCollection 2025 Aug.
7. Potential of AI Chatbots in Online Hair Transplantation Consultations: A Multi-metric Assessment of Three Models.
Aesthetic Plast Surg. 2025 Aug 8. doi: 10.1007/s00266-025-05103-4.
8. Patient Educational Materials for Pheochromocytoma Exceed Recommended Readability Level: An Analysis Across Three Independent Reading Instruments.
J Cancer Educ. 2025 Jun 21. doi: 10.1007/s13187-025-02666-3.
9. AI Chatbots in Pediatric Orthopedics: How Accurate Are Their Answers to Parents' Questions on Bowlegs and Knock Knees?
Healthcare (Basel). 2025 May 27;13(11):1271. doi: 10.3390/healthcare13111271.
10. ChatGPT and Google Gemini are Clinically Inadequate in Providing Recommendations on Management of Developmental Dysplasia of the Hip Compared to American Academy of Orthopaedic Surgeons Clinical Practice Guidelines.
J Pediatr Soc North Am. 2024 Dec 9;10:100135. doi: 10.1016/j.jposna.2024.100135. eCollection 2025 Feb.