Department of Psychological Methods, University of Amsterdam, Amsterdam, The Netherlands.
Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic.
Res Synth Methods. 2023 Jan;14(1):99-116. doi: 10.1002/jrsm.1594. Epub 2022 Aug 7.
Publication bias is a ubiquitous threat to the validity of meta-analysis and the accumulation of scientific evidence. In order to estimate and counteract the impact of publication bias, multiple methods have been developed; however, recent simulation studies have shown the methods' performance to depend on the true data generating process, and no method consistently outperforms the others across a wide range of conditions. Unfortunately, when different methods lead to contradictory conclusions, researchers can choose those methods that lead to a desired outcome. To avoid the condition-dependent, all-or-none choice between competing methods and conflicting results, we extend robust Bayesian meta-analysis by model-averaging across two prominent approaches to adjusting for publication bias: (1) selection models of p-values and (2) models adjusting for small-study effects. The resulting model ensemble weights the estimates and the evidence for the absence/presence of the effect from the competing approaches according to the support they receive from the data. Applications, simulations, and comparisons to preregistered, multi-lab replications demonstrate the benefits of Bayesian model-averaging of complementary publication bias adjustment methods.
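As a sketch of how such an ensemble weights its components, the standard Bayesian model-averaging identities below show how estimates and evidence are combined via posterior model probabilities; the notation (models M_k, pooled effect size mu, hypothesis sets H_0 and H_1) is illustrative and not taken from the abstract itself.

\[
P(\mathcal{M}_k \mid \text{data}) \;=\; \frac{p(\text{data} \mid \mathcal{M}_k)\, P(\mathcal{M}_k)}{\sum_{j=1}^{K} p(\text{data} \mid \mathcal{M}_j)\, P(\mathcal{M}_j)},
\qquad
p(\mu \mid \text{data}) \;=\; \sum_{k=1}^{K} P(\mathcal{M}_k \mid \text{data})\; p(\mu \mid \text{data}, \mathcal{M}_k),
\]
\[
\text{BF}_{10} \;=\; \frac{\sum_{k \in \mathcal{H}_1} P(\mathcal{M}_k \mid \text{data})}{\sum_{k \in \mathcal{H}_0} P(\mathcal{M}_k \mid \text{data})} \;\Big/\;
\frac{\sum_{k \in \mathcal{H}_1} P(\mathcal{M}_k)}{\sum_{k \in \mathcal{H}_0} P(\mathcal{M}_k)},
\]

where \(\mathcal{H}_1\) collects the models assuming a nonzero effect and \(\mathcal{H}_0\) those assuming \(\mu = 0\); under this scheme the model-averaged estimate and the inclusion Bayes factor for the presence versus absence of the effect automatically lean toward whichever bias-adjustment approach (selection models or small-study-effect models) the data support more strongly.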