Suppr 超能文献

Evaluating the readability, quality and reliability of online patient education materials on post-covid pain.

Affiliations

Department of Physical Medicine and Rehabilitation, Algology, Dokuz Eylül University, Izmir, Turkey.

Department of Anesthesiology and Reanimation, Dokuz Eylül University, Izmir, Turkey.

Publication Information

PeerJ. 2022 Jul 20;10:e13686. doi: 10.7717/peerj.13686. eCollection 2022.

DOI:10.7717/peerj.13686
PMID:35880220
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9308460/
Abstract

BACKGROUND

The use of the Internet to access healthcare-related information is increasing steadily. However, there are concerns regarding the reliability and comprehensibility of this information. This study aimed to investigate the readability, reliability, and quality of Internet-based patient educational materials (PEM) related to "post-COVID-19 pain."

METHODS

One hundred websites that fit the purpose of the study were identified by searching for the terms "post-COVID-19 pain" and "pain after COVID-19" using the Google search engine on February 24, 2022. Website readability was assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Gunning FOG (GFOG). The reliability, quality, and popularity of the websites were assessed using the JAMA score, the DISCERN score/Health on the Net Foundation code of conduct, and Alexa, respectively.

RESULTS

Upon investigation of the textual contents, the mean FRES was 51.40 ± 10.65 (difficult), the mean FKGL and SMOG were 10.93 ± 2.17 and 9.83 ± 1.66 years, respectively, and the mean GFOG was 13.14 ± 2.16 (very difficult). Furthermore, 24.5% of the websites were highly reliable according to JAMA scores, 8% were of high quality according to GQS values, and 10% were HONcode-compliant. There was a statistically significant difference between website types in both reliability (p = 0.003) and quality scores (p = 0.002).

CONCLUSION

The readability level of PEM on post-COVID-19 pain was considerably higher than grade 6 educational level, as recommended by the National Institutes of Health, and had low reliability and poor quality. We suggest that Internet-based PEM should have a certain degree of readability that is in accordance with the educational level of the general public and feature reliable content.
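The four readability indices named in the Methods are closed-form formulas over sentence, word, and syllable counts. Below is a minimal sketch using their standard published coefficients; the syllable counter is a naive vowel-group heuristic (the study itself would have used established calculators), so outputs are approximate:

```python
import math
import re

def _count_syllables(word: str) -> int:
    """Rough syllable count: number of contiguous vowel groups (heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability_scores(text: str) -> dict:
    """Compute FRES, FKGL, SMOG, and Gunning FOG for an English text.

    Uses the standard published coefficients for each index; accuracy is
    limited by the heuristic syllable counter above.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    n_sent, n_words = len(sentences), len(words)
    syllables = [_count_syllables(w) for w in words]
    n_syll = sum(syllables)
    # Polysyllabic ("complex") words: three or more syllables.
    n_poly = sum(1 for s in syllables if s >= 3)

    wps = n_words / n_sent   # average words per sentence
    spw = n_syll / n_words   # average syllables per word
    return {
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        "SMOG": 1.043 * math.sqrt(n_poly * 30 / n_sent) + 3.1291,
        "GFOG": 0.4 * (wps + 100 * n_poly / n_words),
    }
```

Higher FRES means easier text, while FKGL, SMOG, and GFOG approximate the school grade level required, which is why the study compares them against the NIH-recommended grade 6 level.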


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3dd/9308460/4542ff8eda5c/peerj-10-13686-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3dd/9308460/d17d6149b3c7/peerj-10-13686-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3dd/9308460/ada4570aba35/peerj-10-13686-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3dd/9308460/29ea7717db6c/peerj-10-13686-g004.jpg

Similar Articles

1
Evaluating the readability, quality and reliability of online patient education materials on post-covid pain.
PeerJ. 2022 Jul 20;10:e13686. doi: 10.7717/peerj.13686. eCollection 2022.
2
Assessing parental comprehension of online resources on childhood pain.
Medicine (Baltimore). 2024 Jun 21;103(25):e38569. doi: 10.1097/MD.0000000000038569.
3
Evaluating the Readability, Quality, and Reliability of Online Patient Education Materials on Spinal Cord Stimulation.
Turk Neurosurg. 2024;34(4):588-599. doi: 10.5137/1019-5149.JTN.42973-22.3.
4
Evaluating the readability, quality and reliability of online information on Behçet's disease.
Reumatismo. 2022 Sep 13;74(2). doi: 10.4081/reumatismo.2022.1495.
5
Evaluating the readability, quality and reliability of online patient education materials on transcutaneuous electrical nerve stimulation (TENS).
Medicine (Baltimore). 2023 Apr 21;102(16):e33529. doi: 10.1097/MD.0000000000033529.
6
How readable and quality are online patient education materials about Helicobacter pylori?: Assessment of the readability, quality and reliability.
Medicine (Baltimore). 2023 Oct 27;102(43):e35543. doi: 10.1097/MD.0000000000035543.
7
Readability assessment of online tracheostomy care resources.
Otolaryngol Head Neck Surg. 2015 Feb;152(2):272-8. doi: 10.1177/0194599814560338. Epub 2014 Dec 1.
8
Evaluating the readability, quality and reliability of online patient education materials on chronic low back pain.
Natl Med J India. 2024 May-Jun;37(3):124-130. doi: 10.25259/NMJI_327_2022.
9
Online Patient Information for Hysterectomies: A Systematic Environmental Scan of Quality and Readability.
J Obstet Gynaecol Can. 2022 Aug;44(8):870-876. doi: 10.1016/j.jogc.2022.03.015. Epub 2022 Apr 26.
10
Assessment of the Readability of the Online Patient Education Materials of Intensive and Critical Care Societies.
Crit Care Med. 2024 Feb 1;52(2):e47-e57. doi: 10.1097/CCM.0000000000006121. Epub 2023 Nov 13.

Cited By

1
Evaluating the readability, quality, and reliability of responses generated by ChatGPT, Gemini, and Perplexity on the most commonly asked questions about Ankylosing spondylitis.
PLoS One. 2025 Jun 18;20(6):e0326351. doi: 10.1371/journal.pone.0326351. eCollection 2025.
2
How Successful Is AI in Developing Postsurgical Wound Care Education Material?
Wound Repair Regen. 2025 May-Jun;33(3):e70041. doi: 10.1111/wrr.70041.
3
Readability, reliability and quality of responses generated by ChatGPT, gemini, and perplexity for the most frequently asked questions about pain.
Medicine (Baltimore). 2025 Mar 14;104(11):e41780. doi: 10.1097/MD.0000000000041780.
4
A critical analysis of online patient-directed resources on catheter ablation for ventricular arrhythmias.
J Arrhythm. 2025 Mar 6;41(2):e70026. doi: 10.1002/joa3.70026. eCollection 2025 Apr.
5
Assessing the readability, quality and reliability of responses produced by ChatGPT, Gemini, and Perplexity regarding most frequently asked keywords about low back pain.
PeerJ. 2025 Jan 22;13:e18847. doi: 10.7717/peerj.18847. eCollection 2025.
6
An infodemiologic review of internet resources on dental hypersensitivity: A quality and readability assessment.
PLoS One. 2025 Jan 24;20(1):e0312832. doi: 10.1371/journal.pone.0312832. eCollection 2025.
7
Quality Assessment of Medical Institutions' Websites Regarding Prescription Drug Misuse of Glucagon-Like Peptide-1 Receptor Agonists by Off-Label Use for Weight Loss: Website Evaluation Study.
JMIR Form Res. 2025 Jan 1;9:e68792. doi: 10.2196/68792.
8
Performance Assessment of GPT 4.0 on the Japanese Medical Licensing Examination.
Curr Med Sci. 2024 Dec;44(6):1148-1154. doi: 10.1007/s11596-024-2932-9. Epub 2024 Oct 26.
9
How artificial intelligence can provide information about subdural hematoma: Assessment of readability, reliability, and quality of ChatGPT, BARD, and perplexity responses.
Medicine (Baltimore). 2024 May 3;103(18):e38009. doi: 10.1097/MD.0000000000038009.
10
Contents analysis of thyroid cancer-related information uploaded to YouTube by physicians in Korea: endorsing thyroid cancer screening, potentially leading to overdiagnosis.
BMC Public Health. 2024 Apr 2;24(1):942. doi: 10.1186/s12889-024-18403-2.

References

1
Long Haul COVID-19 Videos on YouTube: Implications for Health Communication.
J Community Health. 2022 Aug;47(4):610-615. doi: 10.1007/s10900-022-01086-4. Epub 2022 Apr 12.
2
Online Information of COVID-19: Visibility and Characterization of Highest Positioned Websites by Google between March and April 2020-A Cross-Country Analysis.
Int J Environ Res Public Health. 2022 Jan 28;19(3):1491. doi: 10.3390/ijerph19031491.
3
Official Websites Providing Information on COVID-19 Vaccination: Readability and Content Analysis.
JMIR Public Health Surveill. 2022 Mar 15;8(3):e34003. doi: 10.2196/34003.
4
Time course prevalence of post-COVID pain symptoms of musculoskeletal origin in patients who had survived severe acute respiratory syndrome coronavirus 2 infection: a systematic review and meta-analysis.
Pain. 2022 Jul 1;163(7):1220-1231. doi: 10.1097/j.pain.0000000000002496. Epub 2021 Sep 23.
5
Kyphosis-Related Information On The Internet Is the Quality, Content and Readability Sufficient for the Patients?
Global Spine J. 2022 Apr;12(3):476-482. doi: 10.1177/21925682211015955. Epub 2021 May 12.
6
Pain Symptoms in COVID-19.
Am J Phys Med Rehabil. 2021 Apr 1;100(4):307-312. doi: 10.1097/PHM.0000000000001699.
7
Quality and readability of web-based Arabic health information on COVID-19: an infodemiological study.
BMC Public Health. 2021 Jan 18;21(1):151. doi: 10.1186/s12889-021-10218-9.
8
Health websites on COVID-19: are they readable and credible enough to help public self-care?
J Med Libr Assoc. 2021 Jan 1;109(1):75-83. doi: 10.5195/jmla.2021.1020.
9
Evaluation of Patient's Knowledge, Attitude, and Practice of Cross-Infection Control in Dentistry during COVID-19 Pandemic.
Eur J Dent. 2020 Dec;14(S 01):S1-S6. doi: 10.1055/s-0040-1721295. Epub 2020 Dec 15.
10
Readability of online COVID-19 health information: a comparison between four English speaking countries.
BMC Public Health. 2020 Nov 13;20(1):1635. doi: 10.1186/s12889-020-09710-5.