Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes.

Author Information

Rodgers Melissa A, Pustejovsky James E

Affiliation

Department of Educational Psychology, The University of Texas at Austin.

Publication Information

Psychol Methods. 2020 Jul 16. doi: 10.1037/met0000300.

Abstract

Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such techniques are univariate, in that they assume that each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as for multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting, including the trim and fill test, Egger's regression test, and a likelihood ratio test from a three-parameter selection model (3PSM), when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger's regression test that incorporate robust variance estimation (RVE) or multilevel meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. Variants of Egger's regression maintain Type I error rates when dependent effect sizes are sampled or handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. With the exception of the 3PSM, all methods have limited power to detect selection bias except under strong selection for statistically significant effects. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
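
To make the regression-based tests concrete, here is a minimal Python sketch of Egger's regression test and a cluster-robust (RVE-style) variant, not the authors' implementation. The simulated data and variable names are illustrative assumptions, and statsmodels' basic cluster-robust covariance stands in for the small-sample-corrected RVE estimators evaluated in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)

# Simulate 20 studies, each contributing 3 dependent effect size estimates
# (dependence induced by a shared study-level true effect).
n_studies, k_per_study = 20, 3
study_id = np.repeat(np.arange(n_studies), k_per_study)
se = rng.uniform(0.1, 0.4, size=n_studies * k_per_study)     # standard errors
study_effect = np.repeat(rng.normal(0.2, 0.1, size=n_studies), k_per_study)
y = study_effect + rng.normal(0.0, se)                       # observed effects

# Egger's test: weighted regression of each estimate on its standard error,
# with inverse-variance weights; a nonzero slope on `se` signals
# small-study effects (funnel-plot asymmetry).
X = sm.add_constant(se)
w = 1.0 / se**2

# Univariate version: treats all estimates as independent.
fit_naive = sm.WLS(y, X, weights=w).fit()

# RVE-style variant: same regression, but with cluster-robust standard
# errors that account for dependence within studies.
fit_rve = sm.WLS(y, X, weights=w).fit(
    cov_type="cluster", cov_kwds={"groups": study_id}
)

print("naive Egger p-value (slope on se):", fit_naive.pvalues[1])
print("cluster-robust p-value           :", fit_rve.pvalues[1])
```

As the simulation results suggest, the naive version can reject too often because the within-study dependence is ignored, while the cluster-robust version keeps the test closer to its nominal level.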

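A rough sketch of the 3PSM likelihood ratio test follows, assuming a one-tailed step selection function at p = .05. This basic version treats all estimates as independent, which is exactly the univariate limitation the paper highlights; the simulated data and parameter names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy import optimize, stats

ZCRIT = stats.norm.ppf(0.95)  # one-tailed .05 critical value

def neg_loglik(params, y, se, fix_lam=None):
    """Negative log-likelihood of the 3PSM (mu, tau2, lambda)."""
    if fix_lam is None:
        mu, log_tau2, log_lam = params
        lam = np.exp(log_lam)
    else:
        mu, log_tau2 = params
        lam = fix_lam
    sd = np.sqrt(np.exp(log_tau2) + se**2)   # total SD of each estimate
    crit = ZCRIT * se                        # significance cutoff on the y scale
    sig = y > crit
    w = np.where(sig, 1.0, lam)              # relative reporting probability
    p_nonsig = stats.norm.cdf((crit - mu) / sd)
    A = (1.0 - p_nonsig) + lam * p_nonsig    # normalizing constant
    ll = np.log(w) + stats.norm.logpdf(y, loc=mu, scale=sd) - np.log(A)
    return -np.sum(ll)

def lrt_3psm(y, se):
    """Likelihood ratio test of H0: lambda = 1 (no selective reporting)."""
    null = optimize.minimize(neg_loglik, x0=[0.0, -2.0],
                             args=(y, se, 1.0), method="Nelder-Mead")
    full = optimize.minimize(neg_loglik, x0=[0.0, -2.0, 0.0],
                             args=(y, se, None), method="Nelder-Mead")
    lr = max(2.0 * (null.fun - full.fun), 0.0)
    return lr, stats.chi2.sf(lr, df=1)

# Illustrative data: 400 independent estimates, with ~70% of nonsignificant
# results suppressed to mimic strong selection for significance.
rng = np.random.default_rng(seed=2)
se = rng.uniform(0.1, 0.4, size=400)
y = rng.normal(0.1, np.sqrt(0.05 + se**2))
keep = (y > ZCRIT * se) | (rng.random(400) < 0.3)
lr, p = lrt_3psm(y[keep], se[keep])
print(f"LR = {lr:.2f}, p = {p:.4f}")
```

The test compares the fitted likelihoods with lambda free versus fixed at 1; under strong selection, as in this toy setup, the LR statistic should be large, consistent with the abstract's finding that the 3PSM has the most power in that regime.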
