Coots Madison, Linn Kristin A, Goel Sharad, Navathe Amol S, Parikh Ravi B
Harvard Kennedy School, Harvard University, Cambridge, Massachusetts, USA.
The Parity Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA.
Annu Rev Public Health. 2025 Apr;46(1):507-523. doi: 10.1146/annurev-publhealth-071823-112058. Epub 2024 Dec 3.
Among health care researchers, there is increasing debate over how best to assess and ensure the fairness of algorithms used for clinical decision support and population health, particularly concerning potential racial bias. Here we first distill concerns over the fairness of health care algorithms into four broad categories: (1) the explicit inclusion (or, conversely, the exclusion) of race and ethnicity in algorithms, (2) unequal algorithm decision rates across groups, (3) unequal error rates across groups, and (4) potential bias in the target variable used in prediction. With this taxonomy, we critically examine seven prominent and controversial health care algorithms. We show that popular approaches that aim to improve the fairness of health care algorithms can in fact worsen outcomes for individuals across all racial and ethnic groups. We conclude by offering an alternative, consequentialist framework for algorithm design that mitigates these harms by instead foregrounding outcomes and clarifying trade-offs in the pursuit of equitable decision-making.
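The following minimal sketch is not from the article; it simply illustrates how two of the taxonomy's categories can be quantified on hypothetical data: category (2), unequal decision rates across groups, and category (3), unequal error rates across groups. The group labels, outcomes, and algorithm decisions below are invented for illustration only.

```python
# Illustrative only: compute per-group decision rates and error rates
# (false-positive and false-negative rates) from hypothetical records.
from collections import defaultdict

# Hypothetical records: (group, true_outcome, algorithm_decision)
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
]

by_group = defaultdict(list)
for group, y, d in records:
    by_group[group].append((y, d))

for group, rows in sorted(by_group.items()):
    n = len(rows)
    decision_rate = sum(d for _, d in rows) / n  # category (2): share of group receiving a positive decision
    negatives = [d for y, d in rows if y == 0]
    positives = [d for y, d in rows if y == 1]
    fpr = sum(negatives) / len(negatives) if negatives else float("nan")       # category (3): false-positive rate
    fnr = sum(1 - d for d in positives) / len(positives) if positives else float("nan")  # category (3): false-negative rate
    print(f"group {group}: decision rate={decision_rate:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")
```

Comparing these quantities across groups corresponds to the parity criteria the review critiques; the article's argument is that enforcing equality on such metrics can, in some settings, worsen outcomes for all groups.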