

Evaluating the readability, quality and reliability of online information on Behçet's disease.

Affiliations

Physical Medicine and Rehabilitation, Dokuz Eylul University, İzmir.

Anesthesiology and Reanimation, Subdivision of Critical Care Medicine, Dokuz Eylul University, İzmir.

Publication

Reumatismo. 2022 Sep 13;74(2). doi: 10.4081/reumatismo.2022.1495.

DOI: 10.4081/reumatismo.2022.1495
PMID: 36101989
Abstract

There are concerns over the reliability and comprehensibility of health-related information on the internet. The goal of our research was to analyze the readability, reliability, and quality of information obtained from websites associated with Behçet's disease (BD). On September 20, 2021, the term BD was used to perform a search on Google, and 100 eligible websites were identified. The Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and Gunning Fog (GFOG) were used to evaluate the readability of the websites. The JAMA score was used to assess the websites' reliability, the DISCERN score and the Health on the Net Foundation code of conduct (HONcode) were used to assess quality, and Alexa was used to analyze their popularity. Sections of the text were evaluated, and the results revealed that the mean FRES was 35.49±14.42 (difficult) and the mean GFOG was 14.93±3.13 years (very difficult). According to the JAMA scores, 36% of the websites had a high reliability rating and 20% adhered to the HONcode. The readability was found to significantly differ from the reliability of the websites (p<0.05). Moreover, websites with scientific content were found to have higher readability and reliability (p<0.05). The reading level of BD-related information on the Internet was found to be considerably higher than the Grade 6 level recommended by the National Health Institute, with moderate reliability and good quality. We believe that online information should have an appropriate level of readability and must have reliable content suited to educating the public, particularly on websites that provide patient education material.
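The three readability measures named in the abstract are standard formulas over words-per-sentence, syllables-per-word, and the share of "complex" (3+ syllable) words. A minimal sketch in Python is below; note the syllable counter is a naive vowel-group heuristic for illustration, not the dictionary-based counting that dedicated readability tools use, so scores on real text will differ slightly from published values.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels,
    # subtracting one for a silent final 'e'. Real tools use
    # pronunciation dictionaries for accuracy.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    """Compute FRES, FKGL, and Gunning Fog (GFOG) for a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)  # average words per sentence
    spw = syllables / len(words)       # average syllables per word
    return {
        # Flesch Reading Ease: higher = easier (90+ very easy, <30 very difficult)
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: US school grade needed to understand the text
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        # Gunning Fog index: years of formal education needed
        "GFOG": 0.4 * (wps + 100 * complex_words / len(words)),
    }
```

Under these formulas, the study's mean FRES of 35.49 falls in the "difficult" band and a GFOG of 14.93 corresponds to roughly college-level reading, well above the Grade 6 target mentioned in the abstract.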


Similar Articles

1. Evaluating the readability, quality and reliability of online information on Behçet's disease.
   Reumatismo. 2022 Sep 13;74(2). doi: 10.4081/reumatismo.2022.1495.
2. Evaluating the readability, quality and reliability of online patient education materials on chronic low back pain.
   Natl Med J India. 2024 May-Jun;37(3):124-130. doi: 10.25259/NMJI_327_2022.
3. Evaluating the readability, quality and reliability of online patient education materials on post-covid pain.
   PeerJ. 2022 Jul 20;10:e13686. doi: 10.7717/peerj.13686. eCollection 2022.
4. Evaluating the Readability, Quality, and Reliability of Online Patient Education Materials on Spinal Cord Stimulation.
   Turk Neurosurg. 2024;34(4):588-599. doi: 10.5137/1019-5149.JTN.42973-22.3.
5. Evaluating the readability, quality and reliability of online patient education materials on transcutaneuous electrical nerve stimulation (TENS).
   Medicine (Baltimore). 2023 Apr 21;102(16):e33529. doi: 10.1097/MD.0000000000033529.
6. Assessing parental comprehension of online resources on childhood pain.
   Medicine (Baltimore). 2024 Jun 21;103(25):e38569. doi: 10.1097/MD.0000000000038569.
7. Quality, Reliability, Technical Quality, and Readability of Google Online Information on Childhood Glaucoma.
   J Pediatr Ophthalmol Strabismus. 2024 May-Jun;61(3):198-203. doi: 10.3928/01913913-20231114-01. Epub 2023 Dec 19.
8. Readability assessment of online tracheostomy care resources.
   Otolaryngol Head Neck Surg. 2015 Feb;152(2):272-8. doi: 10.1177/0194599814560338. Epub 2014 Dec 1.
9. How readable and quality are online patient education materials about Helicobacter pylori?: Assessment of the readability, quality and reliability.
   Medicine (Baltimore). 2023 Oct 27;102(43):e35543. doi: 10.1097/MD.0000000000035543.
10. IVC filter - assessing the readability and quality of patient information on the Internet.
   J Vasc Surg Venous Lymphat Disord. 2024 Mar;12(2):101695. doi: 10.1016/j.jvsv.2023.101695. Epub 2023 Oct 26.

Cited By

1. Evaluating the readability, quality, and reliability of responses generated by ChatGPT, Gemini, and Perplexity on the most commonly asked questions about Ankylosing spondylitis.
   PLoS One. 2025 Jun 18;20(6):e0326351. doi: 10.1371/journal.pone.0326351. eCollection 2025.
2. Readability, reliability and quality of responses generated by ChatGPT, gemini, and perplexity for the most frequently asked questions about pain.
   Medicine (Baltimore). 2025 Mar 14;104(11):e41780. doi: 10.1097/MD.0000000000041780.
3. Assessing the readability, quality and reliability of responses produced by ChatGPT, Gemini, and Perplexity regarding most frequently asked keywords about low back pain.
   PeerJ. 2025 Jan 22;13:e18847. doi: 10.7717/peerj.18847. eCollection 2025.
4. Lessons to be learned when designing comprehensible patient-oriented online information about temporomandibular disorders.
   J Oral Rehabil. 2025 Feb;52(2):222-229. doi: 10.1111/joor.13798. Epub 2024 Jul 21.
5. An Analysis of the Readability of Online Sarcoidosis Resources.
   Cureus. 2024 Apr 18;16(4):e58559. doi: 10.7759/cureus.58559. eCollection 2024 Apr.
6. How artificial intelligence can provide information about subdural hematoma: Assessment of readability, reliability, and quality of ChatGPT, BARD, and perplexity responses.
   Medicine (Baltimore). 2024 May 3;103(18):e38009. doi: 10.1097/MD.0000000000038009.
7. How Efficient Is ChatGPT in Accessing Accurate and Quality Health-Related Information?
   Cureus. 2023 Oct 7;15(10):e46662. doi: 10.7759/cureus.46662. eCollection 2023 Oct.
8. From quality to clarity: evaluating the effectiveness of online ınformation related to septic arthritis.
   J Orthop Surg Res. 2023 Sep 15;18(1):689. doi: 10.1186/s13018-023-04181-x.