J Pers Soc Psychol. 2017 Nov;113(5):768. doi: 10.1037/pspi0000116.
Reports an error in "Replicability and other features of a high-quality science: Toward a balanced and empirical approach" by Eli J. Finkel, Paul W. Eastwick, and Harry T. Reis (Journal of Personality and Social Psychology, 2017[Aug], Vol 113[2], 244-253). In the commentary, there was an error in the References list. The publication year for the 18th article was incorrectly cited as 2016, and the in-text acronym associated with this citation should instead read LCL2017. The correct References list citation should read as follows: LeBel, E. P., Campbell, L., & Loving, T. J. (2017). Benefits of open and high-powered research outweigh costs. Journal of Personality and Social Psychology, 113, 230-243. http://dx.doi.org/10.1037/pspi0000049. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2017-30567-002.) Finkel, Eastwick, and Reis (2015; FER2015) argued that psychological science is better served by responding to apprehensions about replicability rates with contextualized solutions than with one-size-fits-all solutions. Here, we extend FER2015's analysis to suggest that much of the discussion of best research practices since 2011 has focused on a single feature of high-quality science, replicability, with insufficient sensitivity to the implications of recommended practices for other features, such as discovery, internal validity, external validity, construct validity, consequentiality, and cumulativeness. Thus, although recommendations for bolstering replicability have been innovative, compelling, and abundant, it is difficult to evaluate their impact on our science as a whole, especially because many research practices that are beneficial for some features of scientific quality are harmful for others. For example, FER2015 argued that bigger samples are generally better, but also noted that very large samples ("those larger than required for effect sizes to stabilize"; p. 291) could have the downside of commandeering resources that would have been better invested in other studies.
In their critique of FER2015, LeBel, Campbell, and Loving (2017; LCL2017) concluded, based on simulated data, that ever-larger samples are better for the efficiency of scientific discovery (i.e., that there are no tradeoffs). As demonstrated here, however, this conclusion holds only when the replicator's resources are considered in isolation. If we widen the assumptions to include the original researcher's resources as well, which is necessary if the goal is to consider resource investment for the field as a whole, the conclusion changes radically, and it strongly supports a tradeoff-based analysis. In general, as psychologists seek to strengthen our science, we must complement our much-needed work on increasing replicability with careful attention to the other features of a high-quality science. (PsycINFO Database Record)