

Similar Articles

2. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.

3. Testing the risk of bias tool showed low reliability between individual reviewers and across consensus assessments of reviewer pairs.
J Clin Epidemiol. 2013 Sep;66(9):973-81. doi: 10.1016/j.jclinepi.2012.07.005. Epub 2012 Sep 13.

4. Testing the Newcastle Ottawa Scale showed low reliability between individual reviewers.
J Clin Epidemiol. 2013 Sep;66(9):982-93. doi: 10.1016/j.jclinepi.2013.03.003. Epub 2013 May 16.

5. Poor reliability between Cochrane reviewers and blinded external reviewers when applying the Cochrane risk of bias tool in physical therapy trials.
PLoS One. 2014 May 13;9(5):e96920. doi: 10.1371/journal.pone.0096920. eCollection 2014.

6. Inter-rater reliability and validity of risk of bias instrument for non-randomized studies of exposures: a study protocol.
Syst Rev. 2020 Feb 12;9(1):32. doi: 10.1186/s13643-020-01291-z.

8. Inter-rater reliability and concurrent validity of ROBINS-I: protocol for a cross-sectional study.
Syst Rev. 2020 Jan 13;9(1):12. doi: 10.1186/s13643-020-1271-6.

9. Applying the risk of bias tool in a systematic review of combination long-acting beta-agonists and inhaled corticosteroids for persistent asthma.
PLoS One. 2011 Feb 24;6(2):e17242. doi: 10.1371/journal.pone.0017242.

10. Assessor burden, inter-rater agreement and user experience of the RoB-SPEO tool for assessing risk of bias in studies estimating prevalence of exposure to occupational risk factors: An analysis from the WHO/ILO Joint Estimates of the Work-related Burden of Disease and Injury.
Environ Int. 2022 Jan;158:107005. doi: 10.1016/j.envint.2021.107005. Epub 2021 Nov 30.

PMID: 22536612
Abstract

BACKGROUND

Numerous tools exist to assess methodological quality or risk of bias in systematic reviews; however, few have undergone extensive reliability or validity testing.

OBJECTIVES

(1) Assess the reliability of the Cochrane Risk of Bias (ROB) tool for randomized controlled trials (RCTs) and the Newcastle-Ottawa Scale (NOS) for cohort studies, both between individual raters and, for the ROB tool, between consensus agreements of individual raters; (2) assess the validity of the Cochrane ROB tool and NOS by examining the association between study quality and treatment effect size (ES); (3) examine the impact of study-level factors on reliability and validity.

METHODS

Two reviewers independently assessed risk of bias for 154 RCTs. For a subset of 30 RCTs, two reviewers from each of four Evidence-based Practice Centers assessed risk of bias and reached consensus. Inter-rater agreement was assessed using kappa statistics. We assessed the association between ES and risk of bias using meta-regression. We examined the impact of study-level factors on the association between risk of bias and ES using subgroup analyses. Two reviewers independently applied the NOS to 131 cohort studies from 8 meta-analyses. Inter-rater agreement was calculated using kappa statistics. Within each meta-analysis, we generated a ratio of pooled estimates for each quality domain. The ratios were combined to give an overall estimate of differences in effect estimates with inverse-variance weighting and a random effects model.
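The kappa statistic used above to quantify inter-rater agreement can be sketched with a minimal Cohen's kappa implementation. This is an illustrative sketch only; the risk-of-bias judgments below are hypothetical and are not the study's data or code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement expected from each rater's marginal
    category frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items where the two raters agree
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement from the product of marginal frequencies
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical risk-of-bias judgments for 10 trials (illustration only)
a = ["low", "low", "high", "unclear", "low", "high", "low", "unclear", "low", "high"]
b = ["low", "high", "high", "unclear", "low", "low", "low", "unclear", "unclear", "high"]
print(round(cohens_kappa(a, b), 2))  # 0.54
```

Perfect agreement yields kappa = 1, while agreement no better than chance yields 0; this chance correction is why kappa, rather than raw percent agreement, is the standard reliability measure for risk-of-bias assessments.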

RESULTS

Inter-rater reliability between two reviewers was considered fair for most domains (κ ranging from 0.24 to 0.37), except for sequence generation (κ=0.79, substantial). Inter-rater reliability of consensus assessments across four reviewer pairs was moderate for sequence generation (κ=0.60), fair for allocation concealment and “other sources of bias” (κ=0.37 and 0.27, respectively), and slight for the remaining domains (κ ranging from 0.05 to 0.09). Inter-rater variability was influenced by study-level factors including the nature of the outcome, nature of the intervention, study design, trial hypothesis, and funding source. Inter-rater variability resulted more often from differing interpretation of the tool than from different information identified in the study reports. No statistically significant differences in ES were found when comparing studies categorized as high, unclear, or low risk of bias. Inter-rater reliability of the NOS varied from substantial for length of follow-up to poor for selection of the non-exposed cohort and demonstration that the outcome was not present at the outset of the study. We found no association between individual NOS items or overall NOS score and effect estimates.
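The verbal labels in these results (slight, fair, moderate, substantial) match the conventional agreement bands of Landis and Koch (1977). A minimal mapping, assuming those conventional cut-points (the abstract does not state its banding explicitly, but the reported labels are consistent with them):

```python
def landis_koch(kappa):
    """Map a kappa value to the conventional Landis-Koch agreement label."""
    if kappa < 0:
        return "poor"
    # Upper bound of each band (inclusive), per Landis and Koch (1977)
    bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
             (0.80, "substantial"), (1.00, "almost perfect")]
    for upper, label in bands:
        if kappa <= upper:
            return label
    return "almost perfect"

print(landis_koch(0.79))  # substantial, as reported for sequence generation
print(landis_koch(0.60))  # moderate, as for the consensus assessments
```

Under this banding, the reported values fall out as in the text: κ of 0.24 to 0.37 is fair, 0.05 to 0.09 is slight.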

CONCLUSION

More specific guidance is needed to apply risk of bias/quality tools. Study-level factors that were shown to influence agreement provide direction for detailed guidance. Low agreement across pairs of reviewers has implications for incorporation of risk of bias into results and grading the strength of evidence. Variable agreement for the NOS, and lack of evidence that it discriminates studies that may provide biased results, underscores the need for more detailed guidance to apply the tool in systematic reviews.
