


Challenging the N-Heuristic: Effect size, not sample size, predicts the replicability of psychological science.

Affiliations

Graduate School of Education, Stanford University, Stanford, California, United States of America.

School of Humanities and Social Science, The Chinese University of Hong Kong, Shenzhen, Shenzhen, China.

Publication Info

PLoS One. 2024 Aug 23;19(8):e0306911. doi: 10.1371/journal.pone.0306911. eCollection 2024.

DOI:10.1371/journal.pone.0306911
PMID:39178270
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11343368/
Abstract

Large sample size (N) is seen as a key criterion in judging the replicability of psychological research, a phenomenon we refer to as the N-Heuristic. This heuristic has led to the incentivization of fast, online, non-behavioral studies, to the potential detriment of psychological science. While large N should in principle increase statistical power and thus the replicability of effects, in practice it may not. Large-N studies may have other attributes that undercut their power or validity. Consolidating data from all systematic, large-scale attempts at replication (N = 307 original-replication study pairs), we find that the original study's sample size did not predict its likelihood of being replicated (rs = -0.02, p = 0.741), even with study design and research area controlled. By contrast, effect size emerged as a substantial predictor (rs = 0.21, p < 0.001), which held regardless of the study's sample size. N may be a poor predictor of replicability because studies with larger N investigated smaller effects (rs = -0.49, p < 0.001). Contrary to these results, a survey of 215 professional psychologists, presenting them with a comprehensive list of methodological criteria, found sample size to be rated as the most important criterion in judging a study's replicability. Our findings strike a cautionary note with respect to the prioritization of large N in judging the replicability of psychological science.
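The abstract's key statistics (rs) are Spearman rank correlations, e.g. between original-study sample size and effect size. As a minimal illustration of how such a correlation is computed, the sketch below implements Spearman's rho in pure Python and applies it to hypothetical toy data (the numbers are invented for illustration, not taken from the paper):

```python
from statistics import mean

def avg_ranks(values):
    # Assign 1-based ranks, averaging the ranks of tied values.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        r = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = r
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank-transformed data.
    rx, ry = avg_ranks(x), avg_ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical toy data: five original studies' sample sizes and effect sizes.
n      = [40, 80, 200, 500, 1500]
effect = [0.60, 0.45, 0.30, 0.15, 0.08]
print(round(spearman(n, effect), 2))  # → -1.0 (larger N, smaller effects)
```

A perfectly monotone decreasing relationship yields rho = -1.0; the paper's observed rs = -0.49 between N and effect size reflects the same direction of association, much weaker.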


Figures (PMC11343368):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/809c/11343368/1dff1c872313/pone.0306911.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/809c/11343368/f140cf515b5e/pone.0306911.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/809c/11343368/27914f4d4daf/pone.0306911.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/809c/11343368/9179f028e8e7/pone.0306911.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/809c/11343368/e49b820e28c5/pone.0306911.g005.jpg

Similar Articles

1
Challenging the N-Heuristic: Effect size, not sample size, predicts the replicability of psychological science.
PLoS One. 2024 Aug 23;19(8):e0306911. doi: 10.1371/journal.pone.0306911. eCollection 2024.
2
Is the Political Slant of Psychology Research Related to Scientific Replicability?
Perspect Psychol Sci. 2020 Nov;15(6):1310-1328. doi: 10.1177/1745691620924463. Epub 2020 Aug 19.
3
What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science.
Perspect Psychol Sci. 2016 Jul;11(4):539-44. doi: 10.1177/1745691616646366.
4
Methodological reporting behavior, sample sizes, and statistical power in studies of event-related potentials: Barriers to reproducibility and replicability.
Psychophysiology. 2019 Nov;56(11):e13437. doi: 10.1111/psyp.13437. Epub 2019 Jul 19.
5
Estimating the deep replicability of scientific findings using human and artificial intelligence.
Proc Natl Acad Sci U S A. 2020 May 19;117(20):10762-10768. doi: 10.1073/pnas.1909046117. Epub 2020 May 4.
6
Sample Size, Replicability, and Pre-Test Likelihoods-Essential, Overlooked, and Critical Components of Statistical Inference: A Guide to Statistical Methods and Study Design.
J Neurotrauma. 2023 Oct;40(19-20):1990-1994. doi: 10.1089/neu.2022.0491. Epub 2023 Aug 3.
7
Replication concerns in sports and exercise science: a narrative review of selected methodological issues in the field.
R Soc Open Sci. 2022 Dec 14;9(12):220946. doi: 10.1098/rsos.220946. eCollection 2022 Dec.
8
High replicability of newly discovered social-behavioural findings is achievable.
Nat Hum Behav. 2024 Feb;8(2):311-319. doi: 10.1038/s41562-023-01749-9. Epub 2023 Nov 9.
9
A discipline-wide investigation of the replicability of Psychology papers over the past two decades.
Proc Natl Acad Sci U S A. 2023 Feb 7;120(6):e2208863120. doi: 10.1073/pnas.2208863120. Epub 2023 Jan 30.
10
Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015.
Nat Hum Behav. 2018 Sep;2(9):637-644. doi: 10.1038/s41562-018-0399-z. Epub 2018 Aug 27.

Cited By

1
Driving innovation: When design collaboration becomes open source, how do reward mechanisms and in-process feedback act as catalysts.
PLoS One. 2025 Jul 2;20(7):e0327482. doi: 10.1371/journal.pone.0327482. eCollection 2025.

References

1
A manifesto for reproducible science.
Nat Hum Behav. 2017 Jan 10;1(1):0021. doi: 10.1038/s41562-016-0021.
2
So Useful as a Good Theory? The Practicality Crisis in (Social) Psychological Theory.
Perspect Psychol Sci. 2021 Jul;16(4):864-874. doi: 10.1177/1745691620969650. Epub 2021 Jan 7.
3
Estimating the deep replicability of scientific findings using human and artificial intelligence.
Proc Natl Acad Sci U S A. 2020 May 19;117(20):10762-10768. doi: 10.1073/pnas.1909046117. Epub 2020 May 4.
4
Predicting the replicability of social science lab experiments.
PLoS One. 2019 Dec 5;14(12):e0225826. doi: 10.1371/journal.pone.0225826. eCollection 2019.
5
How we evaluate your manuscripts.
Nat Hum Behav. 2019 Nov;3(11):1127-1128. doi: 10.1038/s41562-019-0778-0.
6
Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015.
Nat Hum Behav. 2018 Sep;2(9):637-644. doi: 10.1038/s41562-018-0399-z. Epub 2018 Aug 27.
7
What meta-analyses reveal about the replicability of psychological research.
Psychol Bull. 2018 Dec;144(12):1325-1346. doi: 10.1037/bul0000169. Epub 2018 Oct 15.
8
The MTurkification of Social and Personality Psychology.
Pers Soc Psychol Bull. 2019 Jun;45(6):842-850. doi: 10.1177/0146167218798821. Epub 2018 Oct 13.
9
The preregistration revolution.
Proc Natl Acad Sci U S A. 2018 Mar 13;115(11):2600-2606. doi: 10.1073/pnas.1708274114.
10
Meta-assessment of bias in science.
Proc Natl Acad Sci U S A. 2017 Apr 4;114(14):3714-3719. doi: 10.1073/pnas.1618569114. Epub 2017 Mar 20.