Sil Annesha, Bespalov Anton, Dalla Christina, Ferland-Beckham Chantelle, Herremans Arnoud, Karantzalos Konstantinos, Kas Martien J, Kokras Nikolaos, Parnham Michael J, Pavlidi Pavlina, Pristouris Kostis, Steckler Thomas, Riedel Gernot, Emmerich Christoph H
Institute of Medical Sciences, University of Aberdeen, Aberdeen, United Kingdom.
PAASP GmbH, Heidelberg, Germany.
Front Behav Neurosci. 2021 Oct 21;15:755812. doi: 10.3389/fnbeh.2021.755812. eCollection 2021.
Laboratory workflows and preclinical models have become increasingly diverse and complex. Confronted with a multitude of information of ambiguous relevance to their specific experiments, scientists run the risk of overlooking critical factors that can influence the planning, conduct and results of studies and that should have been considered. To address this problem, we developed "PEERS" (Platform for the Exchange of Experimental Research Standards), an open-access online platform built to aid scientists in determining which experimental factors and variables are most likely to affect the outcome of a specific test, model or assay and therefore ought to be considered during the design, execution and reporting stages. The PEERS database is categorized into and experiments and provides lists of factors, derived from the scientific literature, that have been deemed critical for experimentation. The platform is based on a structured and transparent system for rating the strength of evidence related to each identified factor and its relevance for a specific method/model. In this context, the rating procedure will not be limited to the PEERS working group but will also allow for community-based grading of evidence. Here we describe a working prototype using the Open Field paradigm in rodents and present the selection of factors specific to each experimental setup, along with the rating system. PEERS not only offers users the possibility to search for information that facilitates experimental rigor, but also draws on the engagement of the scientific community to actively expand the information contained within the platform. Collectively, by helping scientists search for the specific factors relevant to their experiments and share experimental knowledge in a standardized manner, PEERS will serve as a collaborative exchange and analysis tool to enhance data validity and robustness as well as the reproducibility of preclinical research.
PEERS offers a vetted, independent tool for judging the quality of information available on a given test or model; it identifies knowledge gaps and provides guidance on the key methodological considerations that should be prioritized to ensure that preclinical research is conducted to the highest standards and best practice.