Department of Veterinary Diagnostic and Production Animal Medicine, College of Veterinary Medicine, Iowa State University, Ames, Iowa, United States of America.
Independent researcher, Guelph, ON, Canada.
PLoS One. 2018 Jun 28;13(6):e0199441. doi: 10.1371/journal.pone.0199441. eCollection 2018.
Systematic reviews increasingly incorporate data from preclinical animal experiments into evidence networks. Further, there are growing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also help prioritize automation efforts toward identifying the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research, in order to inform the aspects of bias and error assessment that are unique to this field. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed the design elements described by the investigators. We evaluated the Methods and Materials sections of reports for descriptions of the following design elements: 1) use of a comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of the factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurement of the outcome (94/200) and descriptions of potential pseudo-replication (99/200). Use of complex factor arrangements was common: 112 experiments used some form of factorial design (complete, incomplete, or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means that understanding bias and error in preclinical experiments may require greater expertise than is needed for simple parallel designs. Similarly, the use of complex factor arrangements creates novel challenges for accurate automation of data extraction and of bias and error assessment in preclinical experiments.
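The unit-of-analysis problem described above can be made concrete with a small simulation. The sketch below (Python; the scenario and all names are illustrative, not taken from the paper) shows how treating repeated measurements on the same animal as independent observations, i.e., pseudo-replication, inflates the apparent sample size and degrees of freedom, whereas analyzing one summary value per animal respects the true experimental unit.

```python
# Minimal sketch of a unit-of-analysis error (illustrative only; not from the paper).
# Two groups of 5 animals, 10 repeated measurements per animal, no true
# treatment effect, but strong between-animal variation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_animals, n_reps = 5, 10

def group():
    # Each animal has its own baseline; its repeated measures cluster around it.
    baselines = rng.normal(0.0, 1.0, n_animals)  # between-animal SD = 1.0
    return baselines[:, None] + rng.normal(0.0, 0.2, (n_animals, n_reps))  # within SD = 0.2

a, b = group(), group()

# Pseudo-replication: all 50 measurements per group treated as independent,
# so the test runs on 98 degrees of freedom instead of 8.
t_wrong, p_wrong = stats.ttest_ind(a.ravel(), b.ravel())

# Correct unit of analysis: one mean per animal (n = 5 per group).
t_right, p_right = stats.ttest_ind(a.mean(axis=1), b.mean(axis=1))

print(f"pseudo-replicated: p = {p_wrong:.4f} (df = 98)")
print(f"per-animal means:  p = {p_right:.4f} (df = 8)")
```

The pseudo-replicated analysis understates uncertainty because the 10 measurements on one animal are correlated, not independent; this is exactly the error that repeated-measure descriptions in the surveyed reports (94/200) make possible.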
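The factorial arrangements counted in the study also change how effects must be estimated, since main effects and interactions are entangled in the same animals. A minimal sketch of a 2x2 factorial analysis follows, assuming the statsmodels formula API; the factor names and simulated effects are hypothetical, not drawn from the reviewed experiments.

```python
# Minimal sketch of a 2x2 factorial ANOVA (hypothetical factors, simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Hypothetical crossed factors: injury model (sham/tbi) and dose (vehicle/treated),
# with 10 animals per cell (40 animals total).
df = pd.DataFrame({
    "injury": np.repeat(["sham", "tbi"], 20),
    "dose": np.tile(np.repeat(["vehicle", "treated"], 10), 2),
})
df["outcome"] = (
    rng.normal(0.0, 1.0, 40)
    + (df["injury"] == "tbi") * 1.5                                   # main effect
    + ((df["injury"] == "tbi") & (df["dose"] == "treated")) * -1.0    # interaction
)

# Fit main effects plus the interaction; a main-effects-only model would miss
# the treatment-by-injury interaction that factorial designs exist to detect.
model = smf.ols("outcome ~ C(injury) * C(dose)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A split-plot-like design, as apparently used in 20 of the 100 toxicology experiments, would further require random effects for the whole-plot units (e.g., a mixed model), which is one reason automated extraction and bias assessment are harder for these arrangements than for simple parallel designs.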