Luijken K, Lohmann A, Alter U, Claramunt Gonzalez J, Clouth F J, Fossum J L, Hesen L, Huizing A H J, Ketelaar J, Montoya A K, Nab L, Nijman R C C, Penning de Vries B B L, Tibbe T D, Wang Y A, Groenwold R H H
Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands.
Department of Epidemiology, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, University Utrecht, Utrecht, The Netherlands.
R Soc Open Sci. 2024 Jan 17;11(1):231003. doi: 10.1098/rsos.231003. eCollection 2024 Jan.
Results of simulation studies evaluating the performance of statistical methods can have a major impact on the way empirical research is implemented. However, so far there is limited evidence of the replicability of simulation studies. Eight highly cited statistical simulation studies were selected, and their replicability was assessed by teams of replicators with formal training in quantitative methodology. The teams used information in the original publications to write simulation code with the aim of replicating the results. The primary outcome was whether the reported results could be replicated based on the information in the original publications and supplementary materials. Replicability varied greatly: some original studies provided detailed information leading to almost perfect replication of results, whereas other studies did not provide enough information to implement any of the reported simulations. Factors facilitating replication included availability of code, detailed reporting or visualization of data-generating procedures and methods, and replicator expertise. Replicability of statistical simulation studies was mainly impeded by lack of information and by the limited sustainability of information sources. We encourage researchers publishing simulation studies to transparently report all relevant implementation details, either in the research paper itself or in easily accessible supplementary material, and to make their simulation code publicly available using permanent links.