An intersectional framework for counterfactual fairness in risk prediction.

Author information

Solvejg Wastvedt, Jared D. Huling, Julian Wolfson

Affiliations

Division of Biostatistics, University of Minnesota, 420 Delaware St SE, Minneapolis, MN 55455, USA.

Publication information

Biostatistics. 2024 Jul 1;25(3):702-717. doi: 10.1093/biostatistics/kxad021.

Abstract

Along with the increasing availability of health data has come the rise of data-driven models to inform decision making and policy. These models have the potential to benefit both patients and health care providers but can also exacerbate health inequities. Existing "algorithmic fairness" methods for measuring and correcting model bias fall short of what is needed for health policy in two key ways. First, methods typically focus on a single grouping along which discrimination may occur rather than considering multiple, intersecting groups. Second, in clinical applications, risk prediction is typically used to guide treatment, creating distinct statistical issues that invalidate most existing techniques. We present novel unfairness metrics that address both challenges. We also develop a complete framework of estimation and inference tools for our metrics, including the unfairness value ("u-value"), used to determine the relative extremity of unfairness, and standard errors and confidence intervals employing an alternative to the standard bootstrap. We demonstrate application of our framework to a COVID-19 risk prediction model deployed in a major Midwestern health system.
