


The Emerging Hazard of AI-Related Health Care Discrimination.

Publication Information

Hastings Cent Rep. 2021 Jan;51(1):8-9. doi: 10.1002/hast.1203. Epub 2020 Dec 14.

DOI: 10.1002/hast.1203
PMID: 33315263
Abstract

Artificial intelligence holds great promise for improved health-care outcomes. But it also poses substantial new hazards, including algorithmic discrimination. For example, an algorithm used to identify candidates for beneficial "high risk care management" programs routinely failed to select racial minorities. Furthermore, some algorithms deliberately adjust for race in ways that divert resources away from minority patients. To illustrate, algorithms have underestimated African Americans' risks of kidney stones and death from heart failure. Algorithmic discrimination can violate Title VI of the Civil Rights Act and Section 1557 of the Affordable Care Act when it unjustifiably disadvantages underserved populations. This article urges that both legal and technical tools be deployed to promote AI fairness. Plaintiffs should be able to assert disparate impact claims in health-care litigation, and Congress should enact an Algorithmic Accountability Act. In addition, fairness should be a key element in designing, implementing, validating, and employing AI.


Similar Articles

1
The Emerging Hazard of AI-Related Health Care Discrimination.
Hastings Cent Rep. 2021 Jan;51(1):8-9. doi: 10.1002/hast.1203. Epub 2020 Dec 14.
2
Unexpected Inequality: Disparate-Impact From Artificial Intelligence in Healthcare Decisions.
J Law Health. 2021;34(2):215-251.
3
U.S. civil rights policy and access to health care by minority Americans: implications for a changing health care system.
Med Care Res Rev. 2000;57 Suppl 1:236-59. doi: 10.1177/1077558700057001S11.
4
Civil rights in a changing health care system.
Health Aff (Millwood). 1997 Jan-Feb;16(1):90-105. doi: 10.1377/hlthaff.16.1.90.
5
Call for algorithmic fairness to mitigate amplification of racial biases in artificial intelligence models used in orthodontics and craniofacial health.
Orthod Craniofac Res. 2023 Dec;26 Suppl 1:124-130. doi: 10.1111/ocr.12721. Epub 2023 Oct 17.
6
Challenges To Reducing Discrimination And Health Inequity Through Existing Civil Rights Laws.
Health Aff (Millwood). 2017 Jun 1;36(6):1041-1047. doi: 10.1377/hlthaff.2016.1091.
7
Recommendations to promote fairness and inclusion in biomedical AI research and clinical use.
J Biomed Inform. 2024 Sep;157:104693. doi: 10.1016/j.jbi.2024.104693. Epub 2024 Jul 15.
8
"Sorry I Didn't Hear You." The Ethics of Voice Computing and AI in High Risk Mental Health Populations.
AJOB Neurosci. 2020 Apr-Jun;11(2):105-112. doi: 10.1080/21507740.2020.1740355.
9
Nondiscrimination in Health Programs and Activities. Final rule.
Fed Regist. 2016 May 18;81(96):31375-473.
10
Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health.
Front Artif Intell. 2021 Apr 15;3:561802. doi: 10.3389/frai.2020.561802. eCollection 2020.

Cited By

1
Representation of intensivists' race/ethnicity, sex, and age by artificial intelligence: a cross-sectional study of two text-to-image models.
Crit Care. 2024 Nov 11;28(1):363. doi: 10.1186/s13054-024-05134-4.
2
Recognising and managing bias and prejudice in healthcare.
BJA Educ. 2024 Jul;24(7):245-253. doi: 10.1016/j.bjae.2024.03.006. Epub 2024 Apr 25.
3
Workplace health surveillance and COVID-19: algorithmic health discrimination and cancer survivors.
J Cancer Surviv. 2022 Feb;16(1):200-212. doi: 10.1007/s11764-021-01144-1. Epub 2022 Feb 2.