Organization for Psychoeducational Tutoring, 205 Willard Way, Ithaca, NY, 14850, USA.
BMC Med Res Methodol. 2021 Jan 5;21(1):3. doi: 10.1186/s12874-020-01191-9.
Randomized controlled trials are ubiquitously spoken of as the "gold standard" for testing interventions and establishing causal relations. This article presents evidence for two premises. First: there are often major problems with randomized designs; it is by no means true that the only good design is a randomized design. Second: the method of virtual controls in some circumstances can and should replace randomized designs.

Randomized trials can present problems with external validity or generalizability; they can be unethical; they typically involve much time, effort, and expense; their assignments to treatment conditions often can be maintained only for limited time periods; and examination of their track record reveals problems with reproducibility on the one hand, and a lack of overwhelming superiority over observational methods on the other.

The method of virtual controls involves ongoing efforts to refine statistical models that predict outcomes from measurable variables under conditions of no treatment or current standard of care. Research participants then join a single-arm study of a new intervention. Each participant's data, combined with the previously generated formulas, predict that participant's outcome without the new intervention. These predicted outcomes are the "virtual controls." The actual outcomes with intervention are compared with the virtual control outcomes to estimate effect sizes. Part of the research product is the prediction equations themselves, so that in clinical practice, individual treatment decisions may be aided by quantitative answers to the question, "What is estimated to happen to this particular patient with and without this treatment?"

The method of virtual controls is especially indicated when rapid results are of high priority, when withholding intervention is likely harmful, when adequate data exist for prediction of untreated or standard-of-care outcomes, when we want to let people choose the treatment they prefer, when tailoring treatment decisions to individuals is desirable, and when real-world clinical information can be harnessed for analysis.
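The workflow described in the abstract (fit a prediction model on data collected under no treatment or standard of care, predict each single-arm participant's counterfactual outcome, then compare actual outcomes with those predictions) can be sketched in code. The following is a minimal illustration, not the authors' implementation: the linear regression model, the simulated data, the variable names, and the paired effect-size summary are all assumptions made for the sake of the example.

"""
Minimal sketch of a virtual-controls analysis, under the assumptions stated above.
Requires numpy, scikit-learn, and scipy.
"""
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Step 1: build a prediction model from historical data gathered under
# no treatment / current standard of care (simulated here for illustration).
n_hist = 500
X_hist = rng.normal(size=(n_hist, 3))   # baseline predictors (e.g., age, severity, prior score)
y_hist = X_hist @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=1.0, size=n_hist)
control_model = LinearRegression().fit(X_hist, y_hist)

# Step 2: single-arm study of the new intervention. Each participant's
# baseline data, fed through the model, yield a "virtual control" outcome:
# the predicted outcome had that participant received standard care only.
n_trial = 60
X_trial = rng.normal(size=(n_trial, 3))
virtual_controls = control_model.predict(X_trial)

# Observed outcomes with the new intervention (simulated with a true benefit of +1.5).
y_treated = X_trial @ np.array([2.0, -1.0, 0.5]) + 1.5 + rng.normal(scale=1.0, size=n_trial)

# Step 3: estimate the effect size by comparing actual outcomes with each
# participant's virtual-control prediction.
diff = y_treated - virtual_controls
effect = diff.mean()
cohens_dz = effect / diff.std(ddof=1)
t_stat, p_value = ttest_rel(y_treated, virtual_controls)

print(f"Estimated treatment effect: {effect:.2f}")
print(f"Paired Cohen's d_z:         {cohens_dz:.2f}")
print(f"Paired t-test:              t={t_stat:.2f}, p={p_value:.4f}")

In practice the prediction model would be fitted and validated on real registry or standard-of-care data rather than simulated values, and the same fitted equations could be reused at the bedside to answer the abstract's question of what is estimated to happen to a particular patient with and without treatment.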