Pfizer Inc., 219 East 42nd Street, New York, NY 10017, USA.
J Comp Eff Res. 2014 Jan;3(1):79-93. doi: 10.2217/cer.13.84.
The scope of comparative effectiveness research (CER) is wide and therefore requires the application of complex statistical tools and nonstandard procedures. The commonly used methods rest on important, and often untestable, assumptions about the underlying distribution, study heterogeneity and the targeted population. Accordingly, the value of results obtained with such tools depends in large part on the validity of the assumptions underlying the operating characteristics of the procedures. In this article, we elucidate some of the pitfalls that may arise with the most commonly used techniques, including those applied in network meta-analysis, observational data analysis and patient-reported outcome evaluation. In addition, reference is made to the impact of data quality and database heterogeneity on the performance of commonly used CER tools, and to the need for standards to inform researchers engaged in CER.