The Potential For Bias In Machine Learning And Opportunities For Health Insurers To Address It.

Author Affiliations

Stephanie S. Gervasi, Independence Blue Cross, Philadelphia, Pennsylvania.

Irene Y. Chen

Publication Information

Health Aff (Millwood). 2022 Feb;41(2):212-218. doi: 10.1377/hlthaff.2021.01287.

DOI: 10.1377/hlthaff.2021.01287
PMID: 35130064
Abstract

As the use of machine learning algorithms in health care continues to expand, there are growing concerns about equity, fairness, and bias in the ways in which machine learning models are developed and used in clinical and business decisions. We present a guide to the data ecosystem used by health insurers to highlight where bias can arise along machine learning pipelines. We suggest mechanisms for identifying and dealing with bias and discuss challenges and opportunities to increase fairness through analytics in the health insurance industry.

Similar Articles

1. The Potential For Bias In Machine Learning And Opportunities For Health Insurers To Address It.
Health Aff (Millwood). 2022 Feb;41(2):212-218. doi: 10.1377/hlthaff.2021.01287.
2. Optimizing Equity: Working towards Fair Machine Learning Algorithms in Laboratory Medicine.
J Appl Lab Med. 2023 Jan 4;8(1):113-128. doi: 10.1093/jalm/jfac085.
3. Assessment of differentially private synthetic data for utility and fairness in end-to-end machine learning pipelines for tabular data.
PLoS One. 2024 Feb 5;19(2):e0297271. doi: 10.1371/journal.pone.0297271. eCollection 2024.
4. Machine Learning and Bias in Medical Imaging: Opportunities and Challenges.
Circ Cardiovasc Imaging. 2024 Feb;17(2):e015495. doi: 10.1161/CIRCIMAGING.123.015495. Epub 2024 Feb 20.
5. Predictably unequal: understanding and addressing concerns that algorithmic clinical prediction may increase health disparities.
NPJ Digit Med. 2020 Jul 30;3:99. doi: 10.1038/s41746-020-0304-9. eCollection 2020.
6. Risks and Opportunities to Ensure Equity in the Application of Big Data Research in Public Health.
Annu Rev Public Health. 2022 Apr 5;43:59-78. doi: 10.1146/annurev-publhealth-051920-110928. Epub 2021 Dec 6.
7. On the relationship between research parasites and fairness in machine learning: challenges and opportunities.
Gigascience. 2021 Dec 20;10(12). doi: 10.1093/gigascience/giab086.
8. Application of machine learning algorithms to identify cryptic reproductive habitats using diverse information sources.
Oecologia. 2020 Oct;194(1-2):283-298. doi: 10.1007/s00442-020-04753-2. Epub 2020 Oct 1.
9. Can medical algorithms be fair? Three ethical quandaries and one dilemma.
BMJ Health Care Inform. 2022 Apr;29(1). doi: 10.1136/bmjhci-2021-100445.
10. Fairness and bias correction in machine learning for depression prediction across four study populations.
Sci Rep. 2024 Apr 3;14(1):7848. doi: 10.1038/s41598-024-58427-7.

Cited By

1. Bias in vital signs? Machine learning models can learn patients' race or ethnicity from the values of vital signs alone.
BMJ Health Care Inform. 2025 Jul 10;32(1):e101098. doi: 10.1136/bmjhci-2024-101098.
2. Predictive models for low birth weight: a comparative analysis of algorithmic fairness-improving approaches.
Am J Manag Care. 2025 May 1;31(5):e132-e137. doi: 10.37765/ajmc.2025.89737.
3. Investigation on potential bias factors in histopathology datasets.
Sci Rep. 2025 Apr 2;15(1):11349. doi: 10.1038/s41598-025-89210-x.
4. Comparison of 1-year mortality predictions from vendor-supplied academic model for cancer patients.
PeerJ. 2025 Feb 11;13:e18958. doi: 10.7717/peerj.18958. eCollection 2025.
5. Improving medical machine learning models with generative balancing for equity and excellence.
NPJ Digit Med. 2025 Feb 14;8(1):100. doi: 10.1038/s41746-025-01438-z.
6. Artificial intelligence in global health: An unfair future for health in Sub-Saharan Africa?
Health Aff Sch. 2025 Feb 5;3(2):qxaf023. doi: 10.1093/haschl/qxaf023. eCollection 2025 Feb.
7. Fairness in Low Birthweight Predictive Models: Implications of Excluding Race/Ethnicity.
J Racial Ethn Health Disparities. 2025 Jan 29. doi: 10.1007/s40615-025-02296-x.
8. Generative Artificial Intelligence for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations: An ISPOR Working Group Report.
Value Health. 2025 Feb;28(2):175-183. doi: 10.1016/j.jval.2024.10.3846. Epub 2024 Nov 12.
9. Artificial Intelligence in Medical Affairs: A New Paradigm with Novel Opportunities.
Pharmaceut Med. 2024 Sep;38(5):331-342. doi: 10.1007/s40290-024-00536-9. Epub 2024 Sep 11.
10. Generative Artificial Intelligence and Large Language Models in Primary Care Medical Education.
Fam Med. 2024 Oct;56(9):534-540. doi: 10.22454/FamMed.2024.775525. Epub 2024 Aug 8.