Steiner Peter M, Sheehan Patrick, Wong Vivian C
University of Maryland, College Park.
University of Virginia.
Psychol Methods. 2025 Aug;30(4):793-814. doi: 10.1037/met0000597. Epub 2023 Jul 27.
Given recent evidence challenging the replicability of results in the social and behavioral sciences, critical questions have been raised about appropriate measures for determining replication success in comparing effect estimates across studies. At issue is the fact that conclusions about replication success often depend on the measure used for evaluating correspondence in results. Despite the importance of choosing an appropriate measure, there is still no widespread agreement about which measures should be used. This article addresses these questions by formally describing the most commonly used measures for assessing replication success, and by comparing their performance in different contexts according to their replication probabilities, that is, the probability of obtaining replication success given study-specific settings. The measures may be characterized broadly as conclusion-based approaches, which assess the congruence of two independent studies' conclusions about the presence of an effect, and distance-based approaches, which test for a significant difference or equivalence of two effect estimates. We also introduce a new measure for assessing replication success called the correspondence test, which combines a difference test and an equivalence test in the same framework. To help researchers plan prospective replication efforts, we provide closed-form formulas for power calculations that can be used to determine the minimum detectable effect size (and thus, sample sizes) for each study so that a predetermined minimum replication probability can be achieved. Finally, we use a replication data set from the Open Science Collaboration (2015) to demonstrate the extent to which conclusions about replication success depend on the correspondence measure selected. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
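The following is a minimal illustrative sketch, not the authors' implementation, of how the correspondence measures described above might be computed from two studies' effect estimates and standard errors. It assumes normal approximations, a user-chosen equivalence margin, and hypothetical input values; the conclusion-based check, the difference test, and the TOST-style equivalence test stand in for the classes of measures named in the abstract.

```python
# Illustrative sketch only (hypothetical inputs, normal approximations);
# not the closed-form procedures developed in the article.
from scipy.stats import norm

def conclusion_based(d1, se1, d2, se2, alpha=0.05):
    """Do both studies reach the same conclusion about the presence of an effect?"""
    sig1 = 2 * norm.sf(abs(d1 / se1)) < alpha
    sig2 = 2 * norm.sf(abs(d2 / se2)) < alpha
    same_sign = (d1 * d2) > 0
    return (sig1 and sig2 and same_sign) or (not sig1 and not sig2)

def difference_test(d1, se1, d2, se2):
    """Two-sided p-value for H0: the two effects are equal."""
    se_diff = (se1**2 + se2**2) ** 0.5
    return 2 * norm.sf(abs((d1 - d2) / se_diff))

def equivalence_test(d1, se1, d2, se2, margin):
    """TOST-style p-value for H0: |d1 - d2| >= margin (margin is user-chosen)."""
    se_diff = (se1**2 + se2**2) ** 0.5
    p_lower = norm.sf(((d1 - d2) + margin) / se_diff)  # H0: d1 - d2 <= -margin
    p_upper = norm.sf((margin - (d1 - d2)) / se_diff)  # H0: d1 - d2 >= +margin
    return max(p_lower, p_upper)  # small value supports equivalence

if __name__ == "__main__":
    # Hypothetical original and replication estimates with standard errors.
    d1, se1 = 0.40, 0.10
    d2, se2 = 0.25, 0.12
    print("Same conclusion:", conclusion_based(d1, se1, d2, se2))
    print("Difference-test p:", round(difference_test(d1, se1, d2, se2), 3))
    print("Equivalence-test p (margin = 0.3):",
          round(equivalence_test(d1, se1, d2, se2, margin=0.3), 3))
```

In this sketch, a correspondence-style assessment would combine the last two functions: a nonsignificant difference test together with a significant equivalence test points toward replication success, while the reverse pattern points toward failure; the specific decision rules and power formulas are developed in the article itself.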