Verbakel Jan Y, Turner Philip J, Thompson Matthew J, Plüddemann Annette, Price Christopher P, Shinkins Bethany, Van den Bruel Ann
Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK.
Primary Care Innovation Lab, Department of Family Medicine, University of Washington, Seattle, Washington, USA.
BMJ Open. 2017 Sep 1;7(9):e015760. doi: 10.1136/bmjopen-2016-015760.
Since 2008, the Oxford Diagnostic Horizon Scan Programme has been identifying and summarising evidence on new and emerging diagnostic technologies relevant to primary care. We used these reports to determine the sequence and timing of evidence for new point-of-care diagnostic tests and to identify common evidence gaps in this process.
Systematic overview of diagnostic horizon scan reports.
We obtained the primary studies referenced in each horizon scan report (n=40) and extracted details of the study size, clinical setting and design characteristics. In particular, we assessed whether each study evaluated test accuracy, test impact or cost-effectiveness. The evidence for each point-of-care test was mapped against the Horvath framework for diagnostic test evaluation.
We extracted data from 500 primary studies. Most diagnostic technologies underwent clinical performance (ie, ability to detect a clinical condition) assessment (71.2%), but very few progressed to comparative clinical effectiveness (10.0%) or cost-effectiveness evaluation (8.6%), even in the more established and frequently reported clinical domains, such as cardiovascular disease. The median time to complete an evaluation cycle was 9 years (IQR 5.5-12.5 years). The sequence of evidence generation was typically haphazard, and some diagnostic tests appeared to be implemented in routine care without completing essential evaluation stages such as clinical effectiveness.
Evidence generation for new point-of-care diagnostic tests is slow and tends to focus on accuracy, overlooking other test attributes such as impact, implementation and cost-effectiveness. Evaluating this dynamic cycle, and feeding back data from clinical effectiveness to refine analytical and clinical performance, is key to improving the efficiency of point-of-care diagnostic test development and its impact on clinically relevant outcomes. While the 'road map' of steps needed to generate evidence is reasonably well delineated, we provide evidence on the complexity, length and variability of the actual process that many diagnostic technologies undergo.