A discipline-wide investigation of the replicability of Psychology papers over the past two decades.

Affiliations

Department of Psychology and Human Development, Institute of Education, University College London, London WC1H 0AL.

Mendoza College of Business, University of Notre Dame, Notre Dame, IN 46556.

Publication Information

Proc Natl Acad Sci U S A. 2023 Feb 7;120(6):e2208863120. doi: 10.1073/pnas.2208863120. Epub 2023 Jan 30.

Abstract

Conjecture about the weak replicability in social sciences has made scholars eager to quantify the scale and scope of replication failure for a discipline. Yet small-scale manual replication methods alone are ill-suited to deal with this big data problem. Here, we conduct a discipline-wide replication census in science. Our sample (N = 14,126 papers) covers nearly all papers published in the six top-tier Psychology journals over the past 20 y. Using a validated machine learning model that estimates a paper's likelihood of replication, we found evidence that both supports and refutes speculations drawn from a relatively small sample of manual replications. First, we find that a single overall replication rate of Psychology poorly captures the varying degree of replicability among subfields. Second, we find that replication rates are strongly correlated with research methods in all subfields. Experiments replicate at a significantly lower rate than do non-experimental studies. Third, we find that authors' cumulative publication number and citation impact are positively related to the likelihood of replication, while other proxies of research quality and rigor, such as an author's university prestige and a paper's citations, are unrelated to replicability. Finally, contrary to the ideal that media attention should cover replicable research, we find that media attention is positively related to the likelihood of replication failure. Our assessments of the scale and scope of replicability are important next steps toward broadly resolving issues of replicability.
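The abstract attributes its paper-level estimates to a validated machine learning model that scores a paper's likelihood of replication. The authors' actual model is not reproduced here; the sketch below is a minimal illustration, assuming a text-based classifier trained on studies whose replication outcomes are known from manual replication projects. All data, names, and modeling choices in it are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a text-based replication-likelihood estimator.
# NOT the authors' implementation: the training corpus, features,
# and classifier below are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: texts of manually replicated studies,
# labeled 1 if the replication succeeded, 0 if it failed.
train_texts = [
    "Large preregistered survey of voting behavior across cohorts ...",
    "Priming experiment on social judgments in a small lab sample ...",
]
train_labels = [1, 0]

# TF-IDF text features feeding a logistic-regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# Score unseen papers: predict_proba returns class probabilities,
# so column 1 is the estimated likelihood of successful replication.
new_abstracts = ["Correlational study of personality traits and income ..."]
replication_scores = model.predict_proba(new_abstracts)[:, 1]
print(replication_scores)
```

Once validated against held-out manual replications, such a model could be applied at scale, which is what makes a census of 14,126 papers tractable where study-by-study manual replication is not.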

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2d97/9963456/662eca330228/pnas.2208863120fig01.jpg
