Real-world evaluation of interconsensus agreement of risk of bias tools: A case study using risk of bias in nonrandomized studies-of interventions (ROBINS-I).

Author information

Saadi Samer, Hasan Bashar, Kanaan Adel, Abusalih Mohamed, Tarakji Zin, Sadek Mustafa, Shamsi Basha Ayla, Firwana Mohammed, Wang Zhen, Murad M Hassan

Affiliations

Evidence-Based Practice Center, Mayo Clinic, Rochester, Minnesota, USA.

Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, Minnesota, USA.

Publication information

Cochrane Evid Synth Methods. 2024 Jun 26;2(7):e12094. doi: 10.1002/cesm.12094. eCollection 2024 Jul.

Abstract

BACKGROUND

Risk of bias (RoB) tools are critical in systematic reviews and affect subsequent decision-making. RoB tools should have adequate interrater reliability and interconsensus agreement. We present an approach of post hoc evaluation of RoB tools using duplicated studies that overlap systematic reviews.

METHODS

Using a back-citation approach, we identified systematic reviews that used the Risk Of Bias In Nonrandomized Studies-of Interventions (ROBINS-I) tool and retrieved all the included primary studies. We selected studies that were appraised by more than one systematic review and calculated observed agreement and unweighted kappa comparing the different systematic reviews' assessments.
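The two agreement statistics named above can be computed directly from paired consensus judgments. The sketch below is a minimal, self-contained illustration (not the authors' code); the example ratings are hypothetical, and the four-level scale mirrors typical ROBINS-I overall judgments.

```python
from collections import Counter

def observed_agreement_and_kappa(ratings_a, ratings_b):
    """Observed agreement and unweighted Cohen's kappa for two sets of
    categorical judgments (e.g., two systematic reviews' consensus
    ROBINS-I ratings of the same studies)."""
    assert ratings_a and len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: proportion of studies with identical judgments.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance-expected agreement from each rater's marginal distribution.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    # Unweighted kappa: agreement beyond chance, scaled to [p_e, 1].
    kappa = (p_o - p_e) / (1 - p_e) if p_e != 1 else 1.0
    return p_o, kappa

# Hypothetical example: overall RoB judgments for six duplicated studies.
review_1 = ["low", "moderate", "serious", "moderate", "low", "critical"]
review_2 = ["low", "serious", "serious", "moderate", "moderate", "critical"]
p_o, kappa = observed_agreement_and_kappa(review_1, review_2)
```

Because kappa discounts chance agreement, it can be much lower than observed agreement when marginal distributions are concentrated, which is consistent with the pattern in the RESULTS below.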

RESULTS

We identified 903 systematic reviews that used the tool with 51,676 cited references, from which we eventually analyzed 171 duplicated studies assessed using ROBINS-I by different systematic reviewers. The observed agreement on ROBINS-I domains ranged from 54.9% (missing data domain) to 70.3% (deviations from intended interventions domain), and was 63.0% for overall RoB assessment of the study. Kappa coefficient ranged from 0.131 (measurement of outcome domain) to 0.396 (domains of confounding and deviations from intended interventions), and was 0.404 for overall RoB assessment of the study.

CONCLUSION

A post hoc evaluation of RoB tools is feasible by focusing on duplicated studies that overlap systematic reviews. ROBINS-I assessments demonstrated considerable variation in interconsensus agreement among the various systematic reviews that assessed the same study and outcome, suggesting the need for more intensive upfront work to calibrate systematic reviewers on how to identify context-specific information and agree on how to judge it.

Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/14b4/11795881/ad49cba98746/CESM-2-e12094-g001.jpg
