Similar Articles

1. Estimating and Controlling for Equalized Odds via Sensitive Attribute Predictors. Adv Neural Inf Process Syst. 2023 Dec;36:37173-37192.
2. Evaluating and mitigating bias in machine learning models for cardiovascular disease prediction. J Biomed Inform. 2023 Feb;138:104294. doi: 10.1016/j.jbi.2023.104294. Epub 2023 Jan 24.
3. Fair classification domain adaptation: A dual adversarial learning approach. Front Big Data. 2023 Jan 4;5:1049565. doi: 10.3389/fdata.2022.1049565. eCollection 2022.
4. Achieve fairness without demographics for dermatological disease diagnosis. Med Image Anal. 2024 Jul;95:103188. doi: 10.1016/j.media.2024.103188. Epub 2024 May 3.
5. Learning Fair Representations via Distance Correlation Minimization. IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2139-2152. doi: 10.1109/TNNLS.2022.3187165. Epub 2024 Feb 5.
6. Analyzing the Impact of Personalization on Fairness in Federated Learning for Healthcare. J Healthc Inform Res. 2024 Mar 23;8(2):181-205. doi: 10.1007/s41666-024-00164-7. eCollection 2024 Jun.
7. : counterfactual explanations for fairness. Mach Learn. 2023 Mar 28:1-32. doi: 10.1007/s10994-023-06319-8.
8. Interventional Fairness with Indirect Knowledge of Unobserved Protected Attributes. Entropy (Basel). 2021 Nov 25;23(12):1571. doi: 10.3390/e23121571.
9. Ensuring generalized fairness in batch classification. Sci Rep. 2023 Nov 2;13(1):18892. doi: 10.1038/s41598-023-45943-1.
10. Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use. Philos Technol. 2023;36(3):45. doi: 10.1007/s13347-023-00624-9. Epub 2023 Jun 19.

Cited By

1. Pitfalls and Best Practices in Evaluation of AI Algorithmic Biases in Radiology. Radiology. 2025 May;315(2):e241674. doi: 10.1148/radiol.241674.

References

1. AI recognition of patient race in medical imaging: a modelling study. Lancet Digit Health. 2022 Jun;4(6):e406-e414. doi: 10.1016/S2589-7500(22)00063-2. Epub 2022 May 11.
2. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat Med. 2021 Dec;27(12):2176-2182. doi: 10.1038/s41591-021-01595-0. Epub 2021 Dec 10.
3. Deep Learning to Improve Breast Cancer Detection on Screening Mammography. Sci Rep. 2019 Aug 29;9(1):12495. doi: 10.1038/s41598-019-48995-4.
4. Fair Inference on Outcomes. Proc AAAI Conf Artif Intell. 2018 Feb;2018:1931-1940. Epub 2018 Apr 25.
5. On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products. Big Data. 2017 Sep;5(3):246-255.
6. Advancing health care equity through improved data collection. N Engl J Med. 2011 Jun 16;364(24):2276-7. doi: 10.1056/NEJMp1103069.

Estimating and Controlling for Equalized Odds via Sensitive Attribute Predictors.

Author Information

Beepul Bharti, Paul Yi, Jeremias Sulam

Affiliations

Johns Hopkins University.

University of Maryland.

Publication Information

Adv Neural Inf Process Syst. 2023 Dec;36:37173-37192.

PMID: 38867889
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11167624/
Abstract

As the use of machine learning models in real world high-stakes decision settings continues to grow, it is highly important that we are able to audit and control for any potential fairness violations these models may exhibit towards certain groups. To do so, one naturally requires access to sensitive attributes, such as demographics, biological sex, or other potentially sensitive features that determine group membership. Unfortunately, in many settings, this information is often unavailable. In this work we study the well-known equalized odds (EOD) definition of fairness. In a setting without sensitive attributes, we first provide tight and computable upper bounds for the EOD violation of a predictor. These bounds precisely reflect the worst possible EOD violation. Second, we demonstrate how one can provably control the worst-case EOD by a new post-processing correction method. Our results characterize when directly controlling for EOD with respect to the predicted sensitive attributes is - and when is not - optimal when it comes to controlling worst-case EOD. Our results hold under assumptions that are milder than previous works, and we illustrate these results with experiments on synthetic and real datasets.
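For orientation on the quantity involved: equalized odds requires a predictor's true-positive and false-positive rates to be equal across sensitive groups, and the EOD violation is the largest gap between these group-conditional rates. The sketch below is a minimal illustration of that empirical gap on toy data where the sensitive attribute is observed; it is not the authors' code, and the paper's setting, in which the attribute is only available through a possibly imperfect predictor, is precisely what their upper bounds and post-processing correction address. The function name `eod_violation` and the toy data are our own.

```python
import numpy as np

def eod_violation(y_true, y_pred, group):
    """Empirical equalized-odds (EOD) gap between two groups.

    Returns the larger of the true-positive-rate gap (y=1) and the
    false-positive-rate gap (y=0), i.e. the worst-case difference in
    group-conditional rates of predicting 1.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for y in (1, 0):  # y=1 -> TPR gap, y=0 -> FPR gap
        rates = []
        for g in (0, 1):
            mask = (y_true == y) & (group == g)
            rates.append(y_pred[mask].mean())
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy data: a predictor whose error rate depends on the (hypothetical)
# sensitive attribute, so its EOD gap is nonzero.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=2000)                 # sensitive attribute
y = rng.integers(0, 2, size=2000)                 # true label
flip = rng.random(2000) < np.where(a == 0, 0.10, 0.25)
y_hat = np.where(flip, 1 - y, y)                  # group-dependent errors
print(f"worst-case empirical EOD gap: {eod_violation(y, y_hat, a):.3f}")
```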
