Scott I, Youlden D, Coory M
Department of Internal Medicine, Princess Alexandra Hospital, Brisbane, Queensland, Australia 4102.
Qual Saf Health Care. 2004 Feb;13(1):32-9. doi: 10.1136/qshc.2002.003996.
Hospital performance reports based on administrative data should distinguish differences in quality of care between hospitals from case mix-related variation and random error effects. A study was undertaken to determine which of 12 diagnosis-outcome indicators measured across all hospitals in one state showed significant risk-adjusted systematic (or special cause) variation (SV) suggestive of differences in quality of care. For those that did, we determined whether SV persisted within hospital peer groups, whether indicator results correlated at the individual hospital level, and how many adverse outcomes would be avoided if all hospitals achieved indicator values equal to the best performing 20% of hospitals.
All patients admitted during a 12-month period to 180 acute care hospitals in Queensland, Australia with heart failure (n = 5745), acute myocardial infarction (AMI) (n = 3427), or stroke (n = 2955) were entered into the study. Outcomes comprised in-hospital deaths, long hospital stays, and 30-day readmissions. Regression models produced standardised, risk-adjusted, diagnosis-specific outcome event ratios for each hospital. Systematic and random variation in the ratio distributions for each indicator were then apportioned using hierarchical statistical models.
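To make the methods concrete, the following is a minimal Python sketch of the general approach described above: a patient-level logistic regression yields an expected event count per hospital, the observed/expected ratio gives a standardised risk-adjusted outcome event ratio, and the spread of those ratios is crudely split into random (sampling) and systematic (between-hospital) components. The column names (age, sex, charlson, hospital_id), the Poisson approximation, and the simple variance subtraction are illustrative assumptions only; the study itself fitted hierarchical statistical models rather than this shortcut.

```python
# Illustrative sketch only -- not the authors' actual models. Column names
# (age, sex, charlson, hospital_id) and the variance split are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def standardised_ratios(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Indirectly standardised, risk-adjusted outcome event ratios per hospital."""
    # Patient-level logistic regression on case-mix covariates (hypothetical names).
    model = smf.logit(f"{outcome} ~ age + C(sex) + charlson", data=df).fit(disp=0)
    df = df.assign(expected=model.predict(df))
    per_hosp = df.groupby("hospital_id").agg(
        observed=(outcome, "sum"), expected=("expected", "sum"), n=(outcome, "size")
    )
    per_hosp["ratio"] = per_hosp["observed"] / per_hosp["expected"]
    return per_hosp

def partition_variation(per_hosp: pd.DataFrame) -> dict:
    """Crude split of total variation in ratios into random (sampling) and
    systematic (between-hospital) components, assuming Poisson-like counts."""
    total_var = per_hosp["ratio"].var(ddof=1)
    # Sampling variance of an O/E ratio is roughly 1/E when events are Poisson.
    random_var = (1.0 / per_hosp["expected"]).mean()
    systematic_var = max(total_var - random_var, 0.0)
    return {"total": total_var, "random": random_var, "systematic": systematic_var}
```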
Only five of the 12 diagnosis-outcome indicators (42%) showed significant SV across all hospitals (long stays and same-diagnosis readmissions for heart failure; in-hospital deaths and same-diagnosis readmissions for AMI; and in-hospital deaths for stroke). Within hospital peer groups, significant SV was seen for only two indicators (same-diagnosis readmissions for heart failure in tertiary hospitals and in-hospital mortality for AMI in community hospitals). Only two pairs of indicators showed significant correlation. If all hospitals emulated the best performers, at least 20% of AMI and stroke deaths, heart failure long stays, and heart failure and AMI readmissions could be avoided.
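The "best performing 20%" benchmark can be illustrated with a short sketch that takes the per-hospital ratios from the previous block, treats the 20th percentile ratio as the benchmark (lower is better for adverse outcomes), and sums the excess events above that benchmark. This is a hypothetical illustration of the benchmarking idea, not the authors' published calculation.

```python
# Hypothetical benchmark calculation: potentially avoidable events if every
# hospital matched the risk-adjusted ratio achieved by the best-performing 20%.
import pandas as pd

def avoidable_events(per_hosp: pd.DataFrame) -> float:
    """Expects 'observed', 'expected' and 'ratio' columns (see earlier sketch)."""
    benchmark = per_hosp["ratio"].quantile(0.20)  # ratio of the best 20% of hospitals
    # Events each hospital would have recorded at the benchmark ratio.
    at_benchmark = benchmark * per_hosp["expected"]
    excess = (per_hosp["observed"] - at_benchmark).clip(lower=0)
    return float(excess.sum())
```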
Diagnosis-outcome indicators based on administrative data require validation as markers of significant risk-adjusted SV. Validated indicators allow quantification of the outcome benefits realisable if all hospitals achieved best-performer levels. The overall quality of care within a single institution cannot be inferred from the results of one or a few indicators.