Identifying and mitigating algorithmic bias in the safety net.

Authors

Mackin Shaina, Major Vincent J, Chunara Rumi, Newton-Dame Remle

Affiliations

Office of Population Health, New York City Health + Hospitals, New York, NY, USA.

Department of Population Health, NYU Grossman School of Medicine, New York, NY, USA.

Publication

NPJ Digit Med. 2025 Jun 5;8(1):335. doi: 10.1038/s41746-025-01732-w.

DOI: 10.1038/s41746-025-01732-w
PMID: 40473916
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12141433/
Abstract

Algorithmic bias occurs when predictive model performance varies meaningfully across sociodemographic classes, exacerbating systemic healthcare disparities. NYC Health + Hospitals, an urban safety net system, assessed bias in two binary classification models in our electronic medical record: one predicting acute visits for asthma and one predicting unplanned readmissions. We evaluated differences in subgroup performance across race/ethnicity, sex, language, and insurance using equal opportunity difference (EOD), a metric comparing false negative rates. The most biased classes (race/ethnicity for asthma, insurance for readmission) were targeted for mitigation using threshold adjustment, which adjusts subgroup thresholds to minimize EOD, and reject option classification, which re-classifies scores near the threshold by subgroup. Successful mitigation was defined as 1) absolute subgroup EODs <5 percentage points, 2) accuracy reduction <10%, and 3) alert rate change <20%. Threshold adjustment met these criteria; reject option classification did not. We introduce a Supplementary Playbook outlining our approach for low-resource bias mitigation.
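The metric and the two mitigation strategies named in the abstract — equal opportunity difference, threshold adjustment, and reject option classification — can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function names, the 0.05–0.95 threshold grid, the decision band, and the synthetic subgroup labels are all hypothetical choices for demonstration.

```python
# Illustrative sketch of the bias metric and mitigations described in the
# abstract -- NOT the authors' code. Function names, the threshold grid,
# and the decision band are hypothetical choices for demonstration.
import numpy as np

def false_negative_rate(y_true, y_score, threshold):
    """FNR = FN / (FN + TP): the share of true positives the model misses."""
    y_pred = y_score >= threshold
    positives = y_true == 1
    if positives.sum() == 0:
        return 0.0
    return np.sum(positives & ~y_pred) / positives.sum()

def equal_opportunity_difference(y_true, y_score, groups, thresholds, reference):
    """EOD per subgroup: its FNR minus the reference subgroup's FNR."""
    fnr = {g: false_negative_rate(y_true[groups == g],
                                  y_score[groups == g], thresholds[g])
           for g in np.unique(groups)}
    return {g: fnr[g] - fnr[reference] for g in fnr}

def adjust_thresholds(y_true, y_score, groups, reference, base_threshold=0.5,
                      grid=np.linspace(0.05, 0.95, 91)):
    """Threshold adjustment: for each non-reference subgroup, pick the
    threshold whose FNR is closest to the reference FNR, minimizing |EOD|."""
    ref = groups == reference
    ref_fnr = false_negative_rate(y_true[ref], y_score[ref], base_threshold)
    thresholds = {reference: base_threshold}
    for g in np.unique(groups):
        if g == reference:
            continue
        m = groups == g
        fnrs = np.array([false_negative_rate(y_true[m], y_score[m], t)
                         for t in grid])
        thresholds[g] = float(grid[np.argmin(np.abs(fnrs - ref_fnr))])
    return thresholds

def reject_option_classify(y_score, groups, threshold, band, deprived_groups):
    """Reject option classification: inside a band around the threshold,
    re-label predictions for deprived subgroups as positive."""
    y_pred = (y_score >= threshold).astype(int)
    uncertain = np.abs(y_score - threshold) <= band
    y_pred[uncertain & np.isin(groups, list(deprived_groups))] = 1
    return y_pred
```

Because the base threshold itself sits on the search grid, the adjusted threshold can never yield a larger absolute EOD than the unadjusted one; the practical question, as the abstract's success criteria reflect, is how much accuracy and alert-rate change the adjustment costs.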


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7696/12141433/7ce80594486d/41746_2025_1732_Fig1_HTML.jpg
