Auspurg Katrin, Brüderl Josef
Department of Sociology, Ludwig-Maximilians-Universität (LMU) Munich, Munich 80801, Germany.
Proc Natl Acad Sci U S A. 2024 Sep 17;121(38):e2404035121. doi: 10.1073/pnas.2404035121. Epub 2024 Sep 5.
We discuss a relatively new meta-scientific research design: many-analyst studies, which attempt to assess the replicability and credibility of research based on large-scale observational data. In these studies, a large number of analysts try to answer the same research question using the same data. The key idea is that the greater the variation in results, the greater the uncertainty in answering the research question and, accordingly, the lower the credibility of any individual research finding. Compared to individual replications, the large crowd of analysts allows for a more systematic investigation of uncertainty and its sources. However, many-analyst studies are also resource-intensive, and there are some doubts about their potential to provide credible assessments. We identify three issues that any many-analyst study must address: 1) identifying the sources of variation in the results; 2) providing an incentive structure similar to that of standard research; and 3) conducting a proper meta-analysis of the results. We argue that some recent many-analyst studies have failed to address these issues satisfactorily and have therefore provided an overly pessimistic assessment of the credibility of science. We also provide some concrete guidance on how future many-analyst studies could offer a more constructive assessment.