Division of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
Cedars-Sinai Center for Outcomes Research and Education (CS-CORE), Cedars-Sinai Medical Center, Los Angeles, CA, USA.
J Am Med Inform Assoc. 2018 Apr 1;25(4):401-407. doi: 10.1093/jamia/ocx083.
Patients use online consumer ratings to identify high-performing physicians, but it is unclear whether ratings are valid measures of clinical performance. We sought to determine whether online ratings of specialist physicians from 5 platforms predict quality of care, value of care, and peer-assessed physician performance.
We conducted an observational study of 78 physicians representing 8 medical and surgical specialties. We assessed the association of consumer ratings with specialty-specific performance scores (metrics including adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores.
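As a rough illustration of this design, the sketch below fits one such multivariable model in Python with statsmodels, regressing a specialty-specific performance score on a physician's mean consumer rating with adjustment covariates. This is not the authors' code: the input file and column names (mean_rating, performance_score, specialty, years_in_practice) are hypothetical placeholders for the study's variables.

```python
# Illustrative sketch only; variable names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("physician_scores.csv")  # hypothetical physician-level data

# One multivariable model per ratings platform; the coefficient on
# mean_rating corresponds to the β-coefficients reported in the abstract.
model = smf.ols(
    "performance_score ~ mean_rating + C(specialty) + years_in_practice",
    data=df,
).fit()
print(model.params["mean_rating"], model.pvalues["mean_rating"])
```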
Across ratings platforms, multivariable models showed no significant association between mean consumer ratings and specialty-specific performance scores (β-coefficient range, -0.04 to 0.04), primary care physician scores (β-coefficient range, -0.01 to 0.3), or administrator scores (β-coefficient range, -0.2 to 0.1). There was no association between ratings and score subdomains addressing quality or value-based care. Among physicians in the lowest quartile of specialty-specific performance scores, only 5%-32% had consumer ratings in the lowest quartile across platforms. Ratings were consistent across platforms; a physician's score on one platform significantly predicted his or her score on another in 5 of 10 comparisons.
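The lowest-quartile comparison can be sketched in a few lines of pandas: flag physicians in the bottom quartile of performance, flag those in the bottom quartile of ratings on a given platform, and compute the overlap. Again, the file and column names are hypothetical, not the study's data.

```python
# Illustrative sketch of the quartile-concordance check described above.
import pandas as pd

df = pd.read_csv("physician_scores.csv")  # hypothetical input file

perf_q1 = df["performance_score"] <= df["performance_score"].quantile(0.25)
rating_q1 = df["mean_rating"] <= df["mean_rating"].quantile(0.25)

# Share of lowest-quartile performers who also have lowest-quartile ratings
# (the abstract reports 5%-32% across platforms).
concordance = (perf_q1 & rating_q1).sum() / perf_q1.sum()
print(f"Lowest-quartile concordance: {concordance:.0%}")
```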
Online ratings of specialist physicians do not predict objective measures of quality of care or peer assessment of clinical performance. Scores are consistent across platforms, suggesting that they jointly measure a latent construct that is unrelated to performance.
Online consumer ratings should not be used in isolation to select physicians, given their poor association with clinical performance.