
A standardized framework to test event-based experiments.

Affiliations

Neural Circuits, Consciousness and Cognition Research Group, Max Planck Institute of Empirical Aesthetics, Frankfurt am Main, Germany.

Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, 6500 HB, the Netherlands.

Publication information

Behav Res Methods. 2024 Dec;56(8):8852-8868. doi: 10.3758/s13428-024-02508-y. Epub 2024 Sep 16.

Abstract

The replication crisis in experimental psychology and neuroscience has received much attention recently. This has led to wide acceptance of measures to improve scientific practices, such as preregistration and registered reports. Less effort has been devoted to performing and reporting the results of systematic tests of the functioning of the experimental setup itself. Yet, inaccuracies in the performance of the experimental setup may affect the results of a study, lead to replication failures, and importantly, impede the ability to integrate results across studies. Prompted by challenges we experienced when deploying studies across six laboratories collecting electroencephalography (EEG)/magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and intracranial EEG (iEEG), here we describe a framework for both testing and reporting the performance of the experimental setup. In addition, 100 researchers were surveyed to provide a snapshot of current common practices and community standards concerning testing in published experiments' setups. Most researchers reported testing their experimental setups. Almost none, however, published the tests performed or their results. Tests were diverse, targeting different aspects of the setup. Through simulations, we clearly demonstrate how even slight inaccuracies can impact the final results. We end with a standardized, open-source, step-by-step protocol for testing (visual) event-related experiments, shared via protocols.io. The protocol aims to provide researchers with a benchmark for future replications and insights into the research quality to help improve the reproducibility of results, accelerate multicenter studies, increase robustness, and enable integration across studies.
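The abstract notes that simulations show how even slight inaccuracies in the experimental setup can impact final results. The paper's own simulations are not reproduced here, but the core effect is easy to illustrate: in an event-related design, Gaussian jitter in trigger timing smears the trial-averaged evoked response and attenuates its peak amplitude. The sketch below is a minimal, hypothetical example (the component latency, width, amplitude, and noise level are illustrative choices, not values from the paper):

```python
import numpy as np

def simulate_erp_average(n_trials=200, jitter_sd_ms=0.0, seed=0):
    """Peak amplitude of a trial-averaged idealized evoked response
    when event markers carry Gaussian timing jitter (milliseconds)."""
    rng = np.random.default_rng(seed)
    fs = 1000                       # Hz; one sample per millisecond
    t = np.arange(-100, 400) / fs   # epoch from -100 ms to +400 ms

    def evoked(latency_s):
        # Idealized 10 uV component peaking at `latency_s`, 30 ms wide
        return 10.0 * np.exp(-0.5 * ((t - latency_s) / 0.030) ** 2)

    epochs = np.empty((n_trials, t.size))
    for i in range(n_trials):
        shift = rng.normal(0.0, jitter_sd_ms) / 1000.0  # trigger-timing error
        epochs[i] = evoked(0.170 + shift) + rng.normal(0.0, 2.0, t.size)
    return epochs.mean(axis=0).max()  # peak of the across-trial average

exact = simulate_erp_average(jitter_sd_ms=0.0)
jittered = simulate_erp_average(jitter_sd_ms=20.0)
print(f"peak without jitter: {exact:.1f} uV; with 20 ms jitter: {jittered:.1f} uV")
```

Because averaging over jittered latencies convolves the component with the jitter distribution, a 20 ms timing error markedly flattens the measured peak even though every single trial contains the full-amplitude response. This is the kind of distortion that hardware timing tests (e.g., photodiode measurements of actual stimulus onset) are designed to catch before data collection.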

