Optum Labs, Minnetonka, MN, USA.
IQVIA Government Solutions, Cambridge, MA, USA.
Med Care Res Rev. 2023 Apr;80(2):216-227. doi: 10.1177/10775587221098831. Epub 2022 Jun 9.
There is growing interest in ensuring equity and guarding against bias in the use of risk scores produced by machine learning and artificial intelligence models. Risk scores are used to select patients who will receive outreach and support. Inappropriate use of risk scores, however, can perpetuate disparities. Commonly advocated solutions to improve equity are nontrivial to implement and may not pass legal scrutiny. In this article, we introduce pragmatic tools that support better use of risk scores for more equitable outreach programs. Our model output charts allow modeling and care management teams to see the equity consequences of different threshold choices and to select the optimal risk thresholds to trigger outreach. For best results, as with any health equity tool, we recommend that these charts be used by a diverse team and shared with relevant stakeholders.
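The article's actual model output charts are not reproduced in this abstract. As a minimal illustrative sketch only, the snippet below tabulates, for a few candidate outreach thresholds, each group's selection rate and the share of adverse events captured, which is the kind of per-group equity consequence a modeling and care management team might inspect before fixing a cutoff. All data and column names (`risk_score`, `group`, `adverse_event`) are synthetic assumptions, not the authors' method.

```python
# Illustrative sketch only: synthetic data, hypothetical column names.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n, p=[0.7, 0.3]),
    "risk_score": rng.beta(2, 5, size=n),
})
# Hypothetical binary outcome, correlated with the risk score.
df["adverse_event"] = rng.random(n) < df["risk_score"]

def threshold_equity_table(df: pd.DataFrame, thresholds) -> pd.DataFrame:
    """For each candidate outreach threshold, report per-group
    selection rate (share flagged for outreach) and sensitivity
    (share of that group's adverse events captured)."""
    rows = []
    for t in thresholds:
        flagged = df["risk_score"] >= t
        for g, sub in df.groupby("group"):
            f = flagged[sub.index]
            events = sub["adverse_event"].sum()
            rows.append({
                "threshold": t,
                "group": g,
                "selection_rate": f.mean(),
                "sensitivity": (f & sub["adverse_event"]).sum() / max(events, 1),
            })
    return pd.DataFrame(rows)

print(threshold_equity_table(df, thresholds=[0.2, 0.3, 0.4]))
```

Comparing selection rate and sensitivity across groups at each threshold makes visible the trade-offs the abstract describes; a single global cutoff can flag groups at very different rates even when overall accuracy looks acceptable.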