Hofer T P, Hayward R A
Veterans Affairs Health Services Research and Development Program, Ann Arbor, MI, USA.
Med Care. 1996 Aug;34(8):737-53. doi: 10.1097/00005650-199608000-00002.
Many groups involved in health care are interested in using external quality indicators, such as risk-adjusted mortality rates, to assess hospital quality. The authors evaluated the feasibility of using mortality rates for medical diagnoses to identify poor-quality hospitals.
A Monte Carlo simulation model was used to examine whether mortality rates could distinguish 172 average-quality hospitals from 19 poor-quality hospitals (5% versus 25% of deaths being preventable, respectively), using the largest diagnosis-related groups (DRGs) for cardiac, gastrointestinal, cerebrovascular, and pulmonary diseases, as well as an aggregate of all medical DRGs. Discharge counts and observed death rates for all 191 Michigan hospitals were obtained from the Michigan Inpatient Database. Positive predictive value (PPV), sensitivity, and area under the receiver operating characteristic curve were calculated for mortality outlier status as an indicator of poor-quality hospitals. Sensitivity analyses were performed under varying assumptions about the time period of evaluation, quality differences between hospitals, and unmeasured variability in hospital casemix.
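A minimal sketch of this kind of simulation can illustrate the design. The hospital counts and preventable-death fractions below come from the study; the baseline death rate, uniform discharge count, and the mapping from preventable fraction to observed mortality are invented for illustration (the study used actual discharge counts and death rates from the Michigan Inpatient Database):

```python
import math
import random

# Counts and preventable-death fractions from the study.
N_AVG, N_POOR = 172, 19
PREV_AVG, PREV_POOR = 0.05, 0.25

# Assumed (illustrative) parameters, NOT from the study.
BASE_RATE = 0.076    # assumed non-preventable death rate per discharge
DISCHARGES = 400     # assumed discharges per hospital over 2 years
TRIALS = 100         # Monte Carlo replications
Z_CUTOFF = 1.645     # one-sided p < 0.05 outlier threshold

# If a fraction f of a hospital's deaths is preventable, its observed
# rate is the non-preventable rate inflated by 1 / (1 - f).
rate_avg = BASE_RATE / (1 - PREV_AVG)
rate_poor = BASE_RATE / (1 - PREV_POOR)
expected = (N_AVG * rate_avg + N_POOR * rate_poor) / (N_AVG + N_POOR)

def simulate_deaths(rate):
    """Binomial draw of deaths among DISCHARGES patients."""
    return sum(random.random() < rate for _ in range(DISCHARGES))

def is_high_outlier(deaths):
    """Normal-approximation z-test against the pooled expected rate."""
    mu = DISCHARGES * expected
    sd = math.sqrt(DISCHARGES * expected * (1 - expected))
    return (deaths - mu) / sd > Z_CUTOFF

random.seed(1)
tp = fp = 0
for _ in range(TRIALS):
    fp += sum(is_high_outlier(simulate_deaths(rate_avg)) for _ in range(N_AVG))
    tp += sum(is_high_outlier(simulate_deaths(rate_poor)) for _ in range(N_POOR))

sensitivity = tp / (N_POOR * TRIALS)
ppv = tp / (tp + fp)
print(f"sensitivity ~ {sensitivity:.2f}, PPV ~ {ppv:.2f}")
```

Even with this simplified setup, flagged hospitals are a mix of true poor performers and average hospitals that drew unluckily, which is the mechanism behind the low predictive values the study reports.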
For individual DRG groups, mortality rates were a poor measure of quality, even under the optimistic assumption of perfect casemix adjustment. For acute myocardial infarction, high mortality rate outlier status (using 2 years of data and a 0.05 probability cutoff) had a PPV of only 24%; thus, more than three quarters of the hospitals labeled poor quality (high mortality rate outliers) would actually be of average quality. Aggregating all medical DRGs, and still assuming very large quality differences and perfect casemix adjustment, yields a sensitivity for detecting poor-quality hospitals of 35% and a PPV of 52%. Even in this extreme case, the PPV is highly sensitive to the introduction of small unmeasured casemix differences between hospitals.
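The low predictive values follow from base-rate arithmetic: with only 19 of 191 hospitals being poor quality (prevalence of about 10%), even a small false-positive rate produces as many false alarms as true detections. A short calculation using the aggregated-DRG figures above (sensitivity 35%, PPV 52%) backs out the false-positive rate they imply:

```python
# Back out the false-positive rate implied by the reported sensitivity
# and PPV for the aggregate of all medical DRGs.
prev = 19 / 191   # prevalence of poor-quality hospitals in the sample
sens = 0.35       # reported sensitivity
ppv = 0.52        # reported positive predictive value

# PPV = sens*prev / (sens*prev + fpr*(1 - prev))  =>  solve for fpr
fpr = sens * prev * (1 - ppv) / (ppv * (1 - prev))
print(f"implied false-positive rate ~ {fpr:.3f}")  # about 0.036

# Forward check: this fpr reproduces the reported PPV.
ppv_check = sens * prev / (sens * prev + fpr * (1 - prev))
```

A false-positive rate under 4% still leaves nearly half of the flagged hospitals wrongly labeled, simply because poor-quality hospitals are rare.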
Although they may be useful for some surgical diagnoses, DRG-specific hospital mortality rates probably cannot accurately detect poor-quality outliers for medical diagnoses. Even when aggregated across all medical DRGs, hospital mortality rates seem unlikely to be accurate predictors of poor quality, and punitive measures based on high mortality rates would frequently penalize good or average hospitals.