

Algorithmic fairness in pandemic forecasting: lessons from COVID-19.

Author Information

Tsai Thomas C, Arik Sercan, Jacobson Benjamin H, Yoon Jinsung, Yoder Nate, Sava Dario, Mitchell Margaret, Graham Garth, Pfister Tomas

Affiliations

Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, MA, USA.

Department of Surgery, Brigham and Women's Hospital, Boston, MA, USA.

Publication

NPJ Digit Med. 2022 May 10;5(1):59. doi: 10.1038/s41746-022-00602-z.

DOI: 10.1038/s41746-022-00602-z
PMID: 35538215
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9090910/
Abstract

Racial and ethnic minorities have borne a particularly acute burden of the COVID-19 pandemic in the United States. There is a growing awareness from both researchers and public health leaders of the critical need to ensure fairness in forecast results. Without careful and deliberate bias mitigation, inequities embedded in data can be transferred to model predictions, perpetuating disparities, and exacerbating the disproportionate harms of the COVID-19 pandemic. These biases in data and forecasts can be viewed through both statistical and sociological lenses, and the challenges of both building hierarchical models with limited data availability and drawing on data that reflects structural inequities must be confronted. We present an outline of key modeling domains in which unfairness may be introduced and draw on our experience building and testing the Google-Harvard COVID-19 Public Forecasting model to illustrate these challenges and offer strategies to address them. While targeted toward pandemic forecasting, these domains of potentially biased modeling and concurrent approaches to pursuing fairness present important considerations for equitable machine-learning innovation.


Similar Articles

1. Algorithmic fairness in pandemic forecasting: lessons from COVID-19.
NPJ Digit Med. 2022 May 10;5(1):59. doi: 10.1038/s41746-022-00602-z.
2. Evaluation and Mitigation of Racial Bias in Clinical Machine Learning Models: Scoping Review.
JMIR Med Inform. 2022 May 31;10(5):e36388. doi: 10.2196/36388.
3. Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment.
J Med Imaging (Bellingham). 2023 Nov;10(6):061104. doi: 10.1117/1.JMI.10.6.061104. Epub 2023 Apr 26.
4. How New Mexico Leveraged a COVID-19 Case Forecasting Model to Preemptively Address the Health Care Needs of the State: Quantitative Analysis.
JMIR Public Health Surveill. 2021 Jun 9;7(6):e27888. doi: 10.2196/27888.
5. A step toward building a unified framework for managing AI bias.
PeerJ Comput Sci. 2023 Oct 26;9:e1630. doi: 10.7717/peerj-cs.1630. eCollection 2023.
6. A COVID-19 Pandemic Artificial Intelligence-Based System With Deep Learning Forecasting and Automatic Statistical Data Acquisition: Development and Implementation Study.
J Med Internet Res. 2021 May 20;23(5):e27806. doi: 10.2196/27806.
7. Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19.
J Am Med Inform Assoc. 2021 Jan 15;28(1):190-192. doi: 10.1093/jamia/ocaa210.
8. Algorithmic fairness in computational medicine.
EBioMedicine. 2022 Oct;84:104250. doi: 10.1016/j.ebiom.2022.104250. Epub 2022 Sep 6.
9. Predictably unequal: understanding and addressing concerns that algorithmic clinical prediction may increase health disparities.
NPJ Digit Med. 2020 Jul 30;3:99. doi: 10.1038/s41746-020-0304-9. eCollection 2020.
10. Forecasting the COVID-19 Pandemic: Lessons learned and future directions.
medRxiv. 2021 Nov 9:2021.11.06.21266007. doi: 10.1101/2021.11.06.21266007.

Cited By

1. Disparate Model Performance and Stability in Machine Learning Clinical Support for Diabetes and Heart Diseases.
AMIA Jt Summits Transl Sci Proc. 2025 Jun 10;2025:95-104. eCollection 2025.
2. Auditing the fairness of the US COVID-19 forecast hub's case prediction models.
PLoS One. 2025 Apr 22;20(4):e0319383. doi: 10.1371/journal.pone.0319383. eCollection 2025.
3. Examining inclusivity: the use of AI and diverse populations in health and social care: a systematic review.
BMC Med Inform Decis Mak. 2025 Feb 5;25(1):57. doi: 10.1186/s12911-025-02884-1.
4. FAIM: Fairness-aware interpretable modeling for trustworthy machine learning in healthcare.
Patterns (N Y). 2024 Sep 12;5(10):101059. doi: 10.1016/j.patter.2024.101059. eCollection 2024 Oct 11.
5. Early and fair COVID-19 outcome risk assessment using robust feature selection.
Sci Rep. 2023 Nov 3;13(1):18981. doi: 10.1038/s41598-023-36175-4.

References

1. Electronic Health Records as Biased Tools or Tools Against Bias: A Conceptual Model.
Milbank Q. 2022 Mar;100(1):134-150. doi: 10.1111/1468-0009.12545. Epub 2021 Nov 23.
2. A prospective evaluation of AI-augmented epidemiology to forecast COVID-19 in the USA and Japan.
NPJ Digit Med. 2021 Oct 8;4(1):146. doi: 10.1038/s41746-021-00511-7.
3. Reductions in 2020 US life expectancy due to COVID-19 and the disproportionate impact on the Black and Latino populations.
Proc Natl Acad Sci U S A. 2021 Feb 2;118(5). doi: 10.1073/pnas.2014746118.
4. Racial Bias in Pulse Oximetry Measurement.
N Engl J Med. 2020 Dec 17;383(25):2477-2478. doi: 10.1056/NEJMc2029240.
5. How Structural Racism Works - Racist Policies as a Root Cause of U.S. Racial Health Inequities.
N Engl J Med. 2021 Feb 25;384(8):768-773. doi: 10.1056/NEJMms2025396. Epub 2020 Dec 16.
6. Social Determinants Predict Outcomes in Data From a Multi-Ethnic Cohort of 20,899 Patients Investigated for COVID-19.
Front Public Health. 2020 Nov 24;8:571364. doi: 10.3389/fpubh.2020.571364. eCollection 2020.
7. Racism, Not Race, Drives Inequity Across the COVID-19 Continuum.
JAMA Netw Open. 2020 Sep 1;3(9):e2019933. doi: 10.1001/jamanetworkopen.2020.19933.
8. Those designing healthcare algorithms must become actively anti-racist.
Nat Med. 2020 Sep;26(9):1327-1328. doi: 10.1038/s41591-020-1020-3.
9. Hidden in Plain Sight - Reconsidering the Use of Race Correction in Clinical Algorithms.
N Engl J Med. 2020 Aug 27;383(9):874-882. doi: 10.1056/NEJMms2004740. Epub 2020 Jun 17.
10. Community-Level Factors Associated With Racial And Ethnic Disparities In COVID-19 Rates In Massachusetts.
Health Aff (Millwood). 2020 Nov;39(11):1984-1992. doi: 10.1377/hlthaff.2020.01040. Epub 2020 Aug 27.