

The QUEST for quality online health information: validation of a short quantitative tool.

Affiliations

Division of Neurology, Department of Medicine, The University of British Columbia, Vancouver, Canada.

BC Children's & Women's Hospital, Vancouver, Canada.

Publication details

BMC Med Inform Decis Mak. 2018 Oct 19;18(1):87. doi: 10.1186/s12911-018-0668-9.

Abstract

BACKGROUND

Online health information is unregulated and can be of highly variable quality. There is currently no singular quantitative tool that has undergone a validation process, can be used for a broad range of health information, and strikes a balance between ease of use, concision and comprehensiveness. To address this gap, we developed the QUality Evaluation Scoring Tool (QUEST). Here we report on the analysis of the reliability and validity of the QUEST in assessing the quality of online health information.

METHODS

The QUEST and three existing tools designed to measure the quality of online health information were applied to two randomized samples of articles containing information about the treatment (n = 16) and prevention (n = 29) of Alzheimer disease as a sample health condition. Inter-rater reliability was assessed using a weighted Cohen's kappa (κ) for each item of the QUEST. To compare the quality scores generated by each pair of tools, convergent validity was measured using Kendall's tau (τ) ranked correlation.
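The two statistics named above can be illustrated with a short sketch. This is not the study's code; the rating data below are made-up, and the implementations are minimal textbook versions: linear-weighted Cohen's kappa for agreement between two raters on an ordinal item, and Kendall's tau-a (no tie correction) for rank correlation between two tools' quality scores.

```python
from itertools import combinations

def weighted_kappa(r1, r2, categories):
    """Linear-weighted Cohen's kappa between two raters' ordinal scores."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    obs = exp = 0.0          # weighted observed / chance-expected disagreement
    marg1, marg2 = [0] * k, [0] * k
    for a, b in zip(r1, r2):
        obs += abs(idx[a] - idx[b]) / (k - 1)
        marg1[idx[a]] += 1
        marg2[idx[b]] += 1
    obs /= n
    for i in range(k):
        for j in range(k):
            exp += (marg1[i] / n) * (marg2[j] / n) * abs(i - j) / (k - 1)
    return 1 - obs / exp

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    c = d = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        c += s > 0
        d += s < 0
    n = len(x)
    return (c - d) / (n * (n - 1) / 2)

# Hypothetical data: two raters scoring eight articles on a 0-2 ordinal item.
rater1 = [0, 1, 2, 2, 1, 0, 2, 1]
rater2 = [0, 1, 2, 1, 1, 0, 2, 2]
print(round(weighted_kappa(rater1, rater2, [0, 1, 2]), 3))  # → 0.704

# Hypothetical data: quality scores from two tools on the same six articles.
tool_a = [3, 5, 2, 8, 7, 6]
tool_b = [2, 6, 1, 9, 8, 5]
print(round(kendall_tau_a(tool_a, tool_b), 3))  # → 0.867
```

In practice one would use library routines (e.g. `scipy.stats.kendalltau`, which handles ties, or `sklearn.metrics.cohen_kappa_score` with `weights='linear'`); the hand-rolled versions here only show what the numbers in the RESULTS section measure.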

RESULTS

The QUEST demonstrated high levels of inter-rater reliability for the seven quality items included in the tool (κ ranging from 0.7387 to 1.0, P < .05). The tool was also found to demonstrate high convergent validity. For both treatment- and prevention-related articles, all six pairs of tests exhibited a strong correlation between the tools (τ ranging from 0.41 to 0.65, P < .05).

CONCLUSIONS

Our findings support the QUEST as a reliable and valid tool to evaluate online articles about health. Results provide evidence that the QUEST integrates the strengths of existing tools and evaluates quality with equal efficacy using a concise, seven-item questionnaire. The QUEST can serve as a rapid, effective, and accessible method of appraising the quality of online health information for researchers and clinicians alike.


Figure 1 (from the article): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/47d1/6194721/10ae6269d93e/12911_2018_668_Fig1_HTML.jpg
