School of Psychology and Centre for Brain Research, The University of Auckland.
J Exp Psychol Gen. 2019 Sep;148(9):1615-1627. doi: 10.1037/xge0000504. Epub 2018 Nov 29.
Recent failed attempts to replicate numerous findings in psychology have raised concerns about methodological practices in the behavioral sciences. Greater caution appears to be required when evaluating single studies, and systematic replications and meta-analyses are being encouraged. Here, we add an element to this ongoing discussion by proposing that the typical assumptions of meta-analyses be substantiated. Specifically, we argue that when effects come from more than one underlying distribution, meta-analytic averages extracted from a series of studies can be deceptive, with potentially detrimental consequences. The properties of the underlying distributions, we propose, should be modeled on the basis of the variability in a given population of effect sizes. We describe how to adequately test for a plurality of distribution modes, how to use the resulting probabilistic assessments to refine evaluations of a body of evidence, and why current models are insufficient for addressing these concerns. We also consider the advantages and limitations of this method, and demonstrate how systematic testing could lead to stronger inferences. Additional material detailing all of the examples, the algorithm, and the code is provided online to facilitate replication and broader use across the field of psychology. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
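A minimal sketch of the abstract's central point, under assumed (hypothetical) numbers: if half of a literature's effect sizes cluster near d ≈ 0 and the other half near d ≈ 0.8, the pooled meta-analytic average lands between the two modes and describes neither underlying distribution. The distributions, sample sizes, and seed below are illustrative only and are not drawn from the article's examples.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bimodal literature: one subset of studies with no true
# effect (d ~ N(0, 0.1)) and another with a sizable effect (d ~ N(0.8, 0.1)).
null_effects = rng.normal(loc=0.0, scale=0.1, size=50)
real_effects = rng.normal(loc=0.8, scale=0.1, size=50)
all_effects = np.concatenate([null_effects, real_effects])

# A naive (unweighted) meta-analytic average falls roughly midway
# between the two modes, a value almost no individual study exhibits.
pooled_mean = all_effects.mean()
print(f"pooled mean: {pooled_mean:.2f}")

# Counting studies near the pooled mean shows how unrepresentative it is.
near_pooled = np.sum(np.abs(all_effects - pooled_mean) < 0.2)
print(f"studies within 0.2 of pooled mean: {near_pooled} of {all_effects.size}")
```

This is why the abstract argues that the plurality of distribution modes should be tested before a single average is reported: here the pooled estimate summarizes a population of effect sizes that is not unimodal.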