Inter-observer agreement for quality measures applied to online health information.

Author information

Sagaram Smitha, Walji Muhammad, Meric-Bernstam Funda, Johnson Craig, Bernstam Elmer

Affiliation

School of Health Information Sciences, The University of Texas Health Science Center at Houston, 77030, USA.

Publication information

Stud Health Technol Inform. 2004;107(Pt 2):1308-12.

Abstract

Many quality criteria have been developed to rate the quality of online health information, but few instruments have been validated for inter-observer reliability. We therefore assessed the degree to which two raters agreed on the presence or absence of information for 22 commonly cited quality criteria on a sample of 21 complementary and alternative medicine websites. Our preliminary analysis showed poor inter-rater agreement on 10 of the 22 quality criteria. We therefore created an operational definition for each criterion, reduced the number of allowed response choices, and specified where on a site to look for the information. As a result, 15 of the 22 quality criteria had a kappa >0.6. We conclude that, even with precise definitions, some commonly used criteria for assessing the quality of online health information cannot be rated reliably; however, inter-rater agreement can be improved by providing precise operational definitions.
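
The agreement statistic referred to above is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. As a minimal sketch (not part of the study; the function and the ratings below are illustrative assumptions, not the authors' data), the following Python computes kappa for two raters judging the presence (1) or absence (0) of one criterion across a set of websites:

```python
# Minimal sketch: Cohen's kappa for two raters over the same items.
# The ratings in the usage example are hypothetical, not the study's data.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters giving categorical labels to the same items."""
    assert len(rater_a) == len(rater_b) and rater_a, "raters must label the same items"
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)

    # Observed agreement: fraction of items where the two raters give the same label.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: from each rater's marginal label frequencies.
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )

    # Kappa = (observed - expected) / (1 - expected); 1.0 is perfect agreement.
    if p_expected == 1.0:
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)


if __name__ == "__main__":
    # Hypothetical presence/absence judgments for one criterion on 21 sites.
    rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0]
    rater_b = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1]
    print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # kappa >0.6 counted as acceptable agreement in the study
```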
