Mackin Shaina, Major Vincent J, Chunara Rumi, Newton-Dame Remle
Office of Population Health, New York City Health + Hospitals, New York, NY, USA.
Department of Population Health, NYU Grossman School of Medicine, New York, NY, USA.
NPJ Digit Med. 2025 Jun 5;8(1):335. doi: 10.1038/s41746-025-01732-w.
Algorithmic bias occurs when predictive model performance varies meaningfully across sociodemographic classes, exacerbating systemic healthcare disparities. NYC Health + Hospitals, an urban safety net system, assessed bias in two binary classification models in our electronic medical record: one predicting acute visits for asthma and one predicting unplanned readmissions. We evaluated differences in subgroup performance across race/ethnicity, sex, language, and insurance using equal opportunity difference (EOD), a metric comparing false negative rates. The most biased classes (race/ethnicity for asthma, insurance for readmission) were targeted for mitigation using threshold adjustment, which adjusts subgroup thresholds to minimize EOD, and reject option classification, which re-classifies scores near the threshold by subgroup. Successful mitigation was defined as 1) absolute subgroup EODs <5 percentage points, 2) accuracy reduction <10%, and 3) alert rate change <20%. Threshold adjustment met these criteria; reject option classification did not. We introduce a Supplementary Playbook outlining our approach for low-resource bias mitigation.
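The bias metric and the mitigation that met the abstract's criteria can be sketched in code. This is a minimal illustration, not the authors' implementation: it computes subgroup false negative rates, the equal opportunity difference (EOD) against a reference subgroup in percentage points, and a per-subgroup threshold search that minimizes the FNR gap. The function names, the choice of a reference subgroup, and the grid of candidate thresholds are all assumptions for the sketch.

```python
import numpy as np

def false_negative_rate(y_true, y_score, threshold):
    """FNR = fraction of true positives scored below the alert threshold."""
    pos = y_true == 1
    if pos.sum() == 0:
        return np.nan
    preds = y_score >= threshold
    return float(np.mean(~preds[pos]))

def equal_opportunity_difference(y_true, y_score, groups, ref_group, threshold=0.5):
    """EOD per subgroup: FNR(subgroup) - FNR(reference), in percentage points."""
    ref = groups == ref_group
    ref_fnr = false_negative_rate(y_true[ref], y_score[ref], threshold)
    eods = {}
    for g in np.unique(groups):
        m = groups == g
        eods[g] = 100.0 * (false_negative_rate(y_true[m], y_score[m], threshold) - ref_fnr)
    return eods

def adjust_thresholds(y_true, y_score, groups, ref_group, ref_threshold=0.5,
                      candidates=np.linspace(0.05, 0.95, 91)):
    """Pick a per-subgroup threshold that minimizes |FNR(subgroup) - FNR(reference)|."""
    ref = groups == ref_group
    ref_fnr = false_negative_rate(y_true[ref], y_score[ref], ref_threshold)
    thresholds = {ref_group: ref_threshold}
    for g in np.unique(groups):
        if g == ref_group:
            continue
        m = groups == g
        gaps = [abs(false_negative_rate(y_true[m], y_score[m], t) - ref_fnr)
                for t in candidates]
        thresholds[g] = float(candidates[int(np.argmin(gaps))])
    return thresholds
```

In practice, the adjusted thresholds would then be checked against the acceptance criteria above (absolute subgroup EODs under 5 percentage points, accuracy reduction under 10%, alert rate change under 20%) before deployment.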