Online physician ratings fail to predict actual performance on measures of quality, value, and peer review.

Author Affiliations

Division of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA.

Cedars-Sinai Center for Outcomes Research and Education (CS-CORE), Cedars-Sinai Medical Center, Los Angeles, CA, USA.

Publication Information

J Am Med Inform Assoc. 2018 Apr 1;25(4):401-407. doi: 10.1093/jamia/ocx083.

Abstract

OBJECTIVE

Patients use online consumer ratings to identify high-performing physicians, but it is unclear if ratings are valid measures of clinical performance. We sought to determine whether online ratings of specialist physicians from 5 platforms predict quality of care, value of care, and peer-assessed physician performance.

MATERIALS AND METHODS

We conducted an observational study of 78 physicians representing 8 medical and surgical specialties. We assessed the association of consumer ratings with specialty-specific performance scores (metrics including adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores.
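For intuition, here is a minimal, illustrative sketch of the kind of multivariable association analysis the methods describe; it is not the authors' code, and the dataset, covariates, and variable names (mean_consumer_rating, performance_score, years_in_practice) are hypothetical stand-ins.

```python
# Illustrative only: synthetic physician-level data; every column name and
# covariate here is a hypothetical assumption, not from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 78  # matches the study's sample of 78 physicians

df = pd.DataFrame({
    "mean_consumer_rating": rng.uniform(1, 5, n),                # 1-5 star average
    "performance_score": rng.normal(50, 10, n),                  # specialty-specific composite
    "specialty": rng.choice([f"spec{i}" for i in range(8)], n),  # 8 specialties
    "years_in_practice": rng.integers(1, 40, n),                 # hypothetical covariate
})

# Multivariable linear model: does mean consumer rating predict the
# performance score after adjusting for specialty and experience?
model = smf.ols(
    "performance_score ~ mean_consumer_rating + C(specialty) + years_in_practice",
    data=df,
).fit()

# The slope on mean_consumer_rating corresponds to the beta-coefficients
# reported in the results; a confidence interval spanning zero indicates
# no significant association.
print(model.params["mean_consumer_rating"])
print(model.conf_int().loc["mean_consumer_rating"])
```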

RESULTS

Across ratings platforms, multivariable models showed no significant association between mean consumer ratings and specialty-specific performance scores (β-coefficient range, -0.04 to 0.04), primary care physician scores (β-coefficient range, -0.01 to 0.3), or administrator scores (β-coefficient range, -0.2 to 0.1). There was no association between ratings and score subdomains addressing quality or value-based care. Among physicians in the lowest quartile of specialty-specific performance scores, only 5%-32% had consumer ratings in the lowest quartile across platforms. Ratings were consistent across platforms; a physician's score on one platform significantly predicted his/her score on another in 5 of 10 comparisons.
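The lowest-quartile comparison in these results can be made concrete with a short sketch (again with hypothetical column names, reusing a frame like df from the sketch above): bin physicians into quartiles on each measure and compute what share of bottom-quartile performers also carry bottom-quartile consumer ratings.

```python
import pandas as pd

def bottom_quartile_overlap(df: pd.DataFrame,
                            perf_col: str = "performance_score",
                            rating_col: str = "mean_consumer_rating") -> float:
    """Share of bottom-quartile performers whose consumer ratings are
    also in the bottom quartile (column names are hypothetical)."""
    perf_q = pd.qcut(df[perf_col], 4, labels=False)      # 0 = lowest quartile
    rating_q = pd.qcut(df[rating_col], 4, labels=False)
    bottom_perf = perf_q == 0
    # The study reports this overlap at only 5%-32% across platforms,
    # i.e., ratings rarely flag the weakest performers.
    return float((rating_q[bottom_perf] == 0).mean())
```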

DISCUSSION

Online ratings of specialist physicians do not predict objective measures of quality of care or peer assessment of clinical performance. Scores are consistent across platforms, suggesting that they jointly measure a latent construct that is unrelated to performance.

CONCLUSION

Online consumer ratings should not be used in isolation to select physicians, given their poor association with clinical performance.
